Field               Type            Min    Max
query_id            stringlengths   32     32
query               stringlengths   5      5.38k
positive_passages   listlengths     1      23
negative_passages   listlengths     4      100
subset              stringclasses   7 values
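A minimal sketch of how rows with this schema might be read and sanity-checked is shown below. It assumes the rows are exported as JSON Lines; the file name and storage format are assumptions, since the dump itself does not say how the data is packaged. The two example rows that follow illustrate the same layout.

```python
import json

# Hypothetical export of the rows shown below; the dump does not name a file.
PATH = "scidocsrr_sample.jsonl"

def iter_rows(path):
    """Yield one row per line, with the fields listed in the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # Sanity checks against the declared field statistics.
            assert len(row["query_id"]) == 32                  # fixed-length id
            assert 1 <= len(row["positive_passages"]) <= 23    # list of {"docid", "text", "title"}
            assert 4 <= len(row["negative_passages"]) <= 100
            assert isinstance(row["subset"], str)              # one of 7 subset names
            yield row

if __name__ == "__main__":
    for row in iter_rows(PATH):
        print(row["subset"], row["query_id"], row["query"][:60])
```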
d2dfa1f211432efc034679bb1662c5c5
Advances in Game Accessibility from 2005 to 2010
[ { "docid": "e4f62bc47ca11c5e4c7aff5937d90c88", "text": "CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing aWizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22 word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 91.75% to 73.73% for the user-independent models.", "title": "" } ]
[ { "docid": "82daa2740da14a2508138ccb6e2e2554", "text": "In this paper, we introduce an Iterative Kalman Smoother (IKS) for tracking the 3D motion of a mobile device in real-time using visual and inertial measurements. In contrast to existing Extended Kalman Filter (EKF)-based approaches, smoothing can better approximate the underlying nonlinear system and measurement models by re-linearizing them. Additionally, by iteratively optimizing over all measurements available, the IKS increases the convergence rate of critical parameters (e.g., IMU-camera clock drift) and improves the positioning accuracy during challenging conditions (e.g., scarcity of visual features). Furthermore, and in contrast to existing inverse filters, the proposed IKS's numerical stability allows for efficient 32-bit implementations on resource-constrained devices, such as cell phones and wearables. We validate the IKS for performing vision-aided inertial navigation on Google Glass, a wearable device with limited sensing and processing, and demonstrate positioning accuracy comparable to that achieved on cell phones. To the best of our knowledge, this work presents the first proof-of-concept real-time 3D indoor localization system on a commercial-grade wearable computer.", "title": "" }, { "docid": "6bcd4a5e41d300e75d877de1b83e0a18", "text": "Medical training has traditionally depended on patient contact. However, changes in healthcare delivery coupled with concerns about lack of objectivity or standardization of clinical examinations lead to the introduction of the 'simulated patient' (SP). SPs are now used widely for teaching and assessment purposes. SPs are usually, but not necessarily, lay people who are trained to portray a patient with a specific condition in a realistic way, sometimes in a standardized way (where they give a consistent presentation which does not vary from student to student). SPs can be used for teaching and assessment of consultation and clinical/physical examination skills, in simulated teaching environments or in situ. All SPs play roles but SPs have also been used successfully to give feedback and evaluate student performance. Clearly, given this potential level of involvement in medical training, it is critical to recruit, train and use SPs appropriately. We have provided a detailed overview on how to do so, for both teaching and assessment purposes. The contents include: how to monitor and assess SP performance, both in terms of validity and reliability, and in terms of the impact on the SP; and an overview of the methods, staff costs and routine expenses required for recruiting, administrating and training an SP bank, and finally, we provide some intercultural comparisons, a 'snapshot' of the use of SPs in medical education across Europe and Asia, and briefly discuss some of the areas of SP use which require further research.", "title": "" }, { "docid": "20086cff7c26a1ae4d981fc512124f94", "text": "Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work provides a comprehensive evaluation of EC2 cloud in different aspects. 
We first analyze the potentials of the cloud by evaluating the raw performance of different services of AWS such as compute, memory, network and I/O. Based on the findings on the raw performance, we then evaluate the performance of the scientific applications running in the cloud. Finally, we compare the performance of AWS with a private cloud, in order to find the root cause of its limitations while running scientific applications. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud in terms of both raw performance and scientific applications performance. Furthermore, we evaluate other services including S3, EBS and DynamoDB among many AWS services in order to assess the abilities of those to be used by scientific applications and frameworks. We also evaluate a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.", "title": "" }, { "docid": "3a52576a2fdaa7f6f9632dc8c4bf0971", "text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.", "title": "" }, { "docid": "57d1648391cac4ccfefd85aacef6b5ba", "text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. 
The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.", "title": "" }, { "docid": "b8fa50df3c76c2192c67cda7ae4d05f5", "text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.", "title": "" }, { "docid": "deed140862c62fa8be4a8a58ffc1d7dc", "text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. 
The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "515edceed7d7bb8a3d2a40f8a9ef405e", "text": "BACKGROUND\nThe rate of bacterial meningitis declined by 55% in the United States in the early 1990s, when the Haemophilus influenzae type b (Hib) conjugate vaccine for infants was introduced. More recent prevention measures such as the pneumococcal conjugate vaccine and universal screening of pregnant women for group B streptococcus (GBS) have further changed the epidemiology of bacterial meningitis.\n\n\nMETHODS\nWe analyzed data on cases of bacterial meningitis reported among residents in eight surveillance areas of the Emerging Infections Programs Network, consisting of approximately 17.4 million persons, during 1998-2007. We defined bacterial meningitis as the presence of H. influenzae, Streptococcus pneumoniae, GBS, Listeria monocytogenes, or Neisseria meningitidis in cerebrospinal fluid or other normally sterile site in association with a clinical diagnosis of meningitis.\n\n\nRESULTS\nWe identified 3188 patients with bacterial meningitis; of 3155 patients for whom outcome data were available, 466 (14.8%) died. The incidence of meningitis changed by -31% (95% confidence interval [CI], -33 to -29) during the surveillance period, from 2.00 cases per 100,000 population (95% CI, 1.85 to 2.15) in 1998-1999 to 1.38 cases per 100,000 population (95% CI 1.27 to 1.50) in 2006-2007. The median age of patients increased from 30.3 years in 1998-1999 to 41.9 years in 2006-2007 (P<0.001 by the Wilcoxon rank-sum test). The case fatality rate did not change significantly: it was 15.7% in 1998-1999 and 14.3% in 2006-2007 (P=0.50). Of the 1670 cases reported during 2003-2007, S. pneumoniae was the predominant infective species (58.0%), followed by GBS (18.1%), N. meningitidis (13.9%), H. influenzae (6.7%), and L. monocytogenes (3.4%). An estimated 4100 cases and 500 deaths from bacterial meningitis occurred annually in the United States during 2003-2007.\n\n\nCONCLUSIONS\nThe rates of bacterial meningitis have decreased since 1998, but the disease still often results in death. 
With the success of pneumococcal and Hib conjugate vaccines in reducing the risk of meningitis among young children, the burden of bacterial meningitis is now borne more by older adults. (Funded by the Emerging Infections Programs, Centers for Disease Control and Prevention.).", "title": "" }, { "docid": "c7dd6824c8de3e988bb7f58141458ef9", "text": "We present a method to classify images into different categories of pornographic content to create a system for filtering pornographic images from network traffic. Although different systems for this application were presented in the past, most of these systems are based on simple skin colour features and have rather poor performance. Recent advances in the image recognition field in particular for the classification of objects have shown that bag-of-visual-words-approaches are a good method for many image classification problems. The system we present here, is based on this approach, uses a task-specific visual vocabulary and is trained and evaluated on an image database of 8500 images from different categories. It is shown that it clearly outperforms earlier systems on this dataset and further evaluation on two novel web-traffic collections shows the good performance of the proposed system.", "title": "" }, { "docid": "5656c77061a3f678172ea01e226ede26", "text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. 
Not only is obesity increasing, but no national success stories have been reported in the past 33 years. Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.", "title": "" }, { "docid": "4147fee030667122923f420ab55e38f7", "text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.", "title": "" }, { "docid": "ca3ea61314d43abeac81546e66ff75e4", "text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.", "title": "" }, { "docid": "d67a93dde102bdcd2dd1a72c80aacd6b", "text": "Network intrusion detection systems have become a standard component in security infrastructures. Unfortunately, current systems are poor at detecting novel attacks without an unacceptable level of false alarms. We propose that the solution to this problem is the application of an ensemble of data mining techniques which can be applied to network connection data in an offline environment, augmenting existing real-time sensors. 
In this paper, we expand on our motivation, particularly with regard to running in an offline environment, and our interest in multisensor and multimethod correlation. We then review existing systems, from commercial systems, to research based intrusion detection systems. Next we survey the state of the art in the area. Standard datasets and feature extraction turned out to be more important than we had initially anticipated, so each can be found under its own heading. Next, we review the actual data mining methods that have been proposed or implemented. We conclude by summarizing the open problems in this area and proposing a new research project to answer some of these open problems.", "title": "" }, { "docid": "3176f0a4824b2dd11d612d55b4421881", "text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"", "title": "" }, { "docid": "42167e7708bb73b08972e15a44a6df02", "text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "title": "" }, { "docid": "9019e71123230c6e2f58341d4912a0dd", "text": "How to effectively manage increasingly complex enterprise computing environments is one of the hardest challenges that most organizations have to face in the era of cloud computing, big data and IoT. Advanced automation and orchestration systems are the most valuable solutions helping IT staff to handle large-scale cloud data centers. Containers are the new revolution in the cloud computing world, they are more lightweight than VMs, and can radically decrease both the start up time of instances and the processing and storage overhead with respect to traditional VMs. The aim of this paper is to provide a comprehensive description of cloud orchestration approaches with containers, analyzing current research efforts, existing solutions and presenting issues and challenges facing this topic.", "title": "" }, { "docid": "6b203b7a8958103b30701ac139eb1fb8", "text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. 
Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.", "title": "" }, { "docid": "8123ab525ce663e44b104db2cacd59a9", "text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.", "title": "" }, { "docid": "dbf5d0f6ce7161f55cf346e46150e8d7", "text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. 
In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "34af5ac483483fa59eda7804918bdb1c", "text": "Automatic spelling and grammatical correction systems are one of the most widely used tools within natural language applications. In this thesis, we assume the task of error correction as a type of monolingual machine translation where the source sentence is potentially erroneous and the target sentence should be the corrected form of the input. Our main focus in this project is building neural network models for the task of error correction. In particular, we investigate sequence-to-sequence and attention-based models which have recently shown a higher performance than the state-of-the-art of many language processing problems. We demonstrate that neural machine translation models can be successfully applied to the task of error correction. While the experiments of this research are performed on an Arabic corpus, our methods in this thesis can be easily applied to any language. Keywords— natural language error correction, recurrent neural networks, encoderdecoder models, attention mechanism", "title": "" } ]
scidocsrr
52cb1aabd581bc09562d69de103e864e
Refining faster-RCNN for accurate object detection
[ { "docid": "d88523afba42431989f5d3bd22f2ad85", "text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.", "title": "" } ]
[ { "docid": "c2f3cebd614fff668e80fa0d77e34972", "text": "In this paper, the unknown parameters of the photovoltaic (PV) module are determined using Genetic Algorithm (GA) method. This algorithm based on minimizing the absolute difference between the maximum powers obtained from module datasheet and the maximum power obtained from the mathematical model of the PV module, at different operating conditions. This method does not need to initial values, so these parameters of the PV module are easily obtained with high accuracy. To validate the proposed method, the results obtained from it are compared with the experimental results obtained from the PV module datasheet for different operating conditions. The results obtained from the proposed model are found to be very close compared to the results given in the datasheet of the PV module.", "title": "" }, { "docid": "2b4b639973f54bdd7b987d5bc9bb3978", "text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.", "title": "" }, { "docid": "c9b9ac230838ffaff404784b66862013", "text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .", "title": "" }, { "docid": "df833f98f7309a5ab5f79fae2f669460", "text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. 
It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.", "title": "" }, { "docid": "eb101664f08f0c5c7cf6bcf8e058b180", "text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.", "title": "" }, { "docid": "6761bd757cdd672f60c980b081d4dbc8", "text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.", "title": "" }, { "docid": "450a0ffcd35400f586e766d68b75cc98", "text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.", "title": "" }, { "docid": "cf5e6ce7313d15f33afa668f27a5e9e2", "text": "Researchers have designed a variety of systems that promote wellness. 
However, little work has been done to examine how casual mobile games can help adults learn how to live healthfully. To explore this design space, we created OrderUP!, a game in which players learn how to make healthier meal choices. Through our field study, we found that playing OrderUP! helped participants engage in four processes of change identified by a well-established health behavior theory, the Transtheoretical Model: they improved their understanding of how to eat healthfully and engaged in nutrition-related analytical thinking, reevaluated the healthiness of their real life habits, formed helping relationships by discussing nutrition with others and started replacing unhealthy meals with more nutritious foods. Our research shows the promise of using casual mobile games to encourage adults to live healthier lifestyles.", "title": "" }, { "docid": "e0551738e41a48ce9105b1dc44dfa980", "text": "Abnormality detection in biomedical images is a one-class classification problem, where methods learn a statistical model to characterize the inlier class using training data solely from the inlier class. Typical methods (i) need well-curated training data and (ii) have formulations that are unable to utilize expert feedback through (a small amount of) labeled outliers. In contrast, we propose a novel deep neural network framework that (i) is robust to corruption and outliers in the training data, which are inevitable in real-world deployment, and (ii) can leverage expert feedback through high-quality labeled data. We introduce an autoencoder formulation that (i) gives robustness through a non-convex loss and a heavy-tailed distribution model on the residuals and (ii) enables semi-supervised learning with labeled outliers. Results on three large medical datasets show that our method outperforms the state of the art in abnormality-detection accuracy.", "title": "" }, { "docid": "5d417375c4ce7c47a90808971f215c91", "text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.", "title": "" }, { "docid": "ca4e3f243b2868445ecb916c081e108e", "text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. 
It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. 
Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. 
Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t.1 During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj , v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its A conflict (as well as a constraint) may apply also to an edge when two agents traverse the same edge in opposite directions. paths have no conflicts. A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the lowlevel search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f -value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N . Once a consistent path has be", "title": "" }, { "docid": "348c62670a729da42654f0cf685bba53", "text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. 
In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.", "title": "" }, { "docid": "f4b06b8993396fc099abf857d5155730", "text": "The Self-Organizing Map (SOM) forms a nonlinear projection from a high-dimensional data manifold onto a low-dimensional grid. A representative model of some subset of data is associated with each grid point. The SOM algorithm computes an optimal collection of models that approximates the data in the sense of some error criterion and also takes into account the similarity relations of the models. The models then become ordered on the grid according to their similarity. When the SOM is used for the exploration of statistical data, the data vectors can be approximated by models of the same dimensionality. When mapping documents, one can represent them statistically by their word frequency histograms or some reduced representations of the histograms that can be regarded as data vectors. We have made SOMs of collections of over one million documents. Each document is mapped onto some grid point, with a link from this point to the document database. The documents are ordered on the grid according to their contents and neighboring documents can be browsed readily. Keywords or key texts can be used to search for the most relevant documents rst. New eeective coding and computing schemes of the mapping are described.", "title": "" }, { "docid": "f2334ce1d717a8f6e91771f95a00b46e", "text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.", "title": "" }, { "docid": "33f0a2bbda3f701dab66a8ffb67d5252", "text": "Microglia, the resident macrophages of the CNS, are exquisitely sensitive to brain injury and disease, altering their morphology and phenotype to adopt a so-called activated state in response to pathophysiological brain insults. Morphologically activated microglia, like other tissue macrophages, exist as many different phenotypes, depending on the nature of the tissue injury. Microglial responsiveness to injury suggests that these cells have the potential to act as diagnostic markers of disease onset or progression, and could contribute to the outcome of neurodegenerative diseases. The persistence of activated microglia long after acute injury and in chronic disease suggests that these cells have an innate immune memory of tissue injury and degeneration. Microglial phenotype is also modified by systemic infection or inflammation. 
Evidence from some preclinical models shows that systemic manipulations can ameliorate disease progression, although data from other models indicates that systemic inflammation exacerbates disease progression. Systemic inflammation is associated with a decline in function in patients with chronic neurodegenerative disease, both acutely and in the long term. The fact that diseases with a chronic systemic inflammatory component are risk factors for Alzheimer disease implies that crosstalk occurs between systemic inflammation and microglia in the CNS.", "title": "" }, { "docid": "020545bf4a1050c8c45d5df57df2fed5", "text": "Relational XQuery systems try to re-use mature relational data management infrastructures to create fast and scalable XML database technology. This paper describes the main features, key contributions, and lessons learned while implementing such a system. Its architecture consists of (i) a range-based encoding of XML documents into relational tables, (ii) a compilation technique that translates XQuery into a basic relational algebra, (iii) a restricted (order) property-aware peephole relational query optimization strategy, and (iv) a mapping from XML update statements into relational updates. Thus, this system implements all essential XML database functionalities (rather than a single feature) such that we can learn from the full consequences of our architectural decisions. While implementing this system, we had to extend the state-of-the-art with a number of new technical contributions, such as loop-lifted staircase join and efficient relational query evaluation strategies for XQuery theta-joins with existential semantics. These contributions as well as the architectural lessons learned are also deemed valuable for other relational back-end engines. The performance and scalability of the resulting system is evaluated on the XMark benchmark up to data sizes of 11GB. The performance section also provides an extensive benchmark comparison of all major XMark results published previously, which confirm that the goal of purely relational XQuery processing, namely speed and scalability, was met.", "title": "" }, { "docid": "b13ccc915f81eca45048ffe9d5da5d4f", "text": "Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. 
For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms a full-scale autonomous helicopter, a large scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging which opens doors to several new research problems.", "title": "" }, { "docid": "2d5dba872d7cd78a9e2d57a494a189ea", "text": "In this chapter, we give an overview of what ontologies are and how they can be used. We discuss the impact of the expressiveness, the number of domain elements, the community size, the conceptual dynamics, and other variables on the feasibility of an ontology project. Then, we break down the general promise of ontologies of facilitating the exchange and usage of knowledge to six distinct technical advancements that ontologies actually provide, and discuss how this should influence design choices in ontology projects. Finally, we summarize the main challenges of ontology management in real-world applications, and explain which expectations from practitioners can be met as", "title": "" }, { "docid": "4ecc1775c64b7ccc2904070d3657948d", "text": "Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. This article addresses this disconnect between the statistical principles behind EM and its algorithmic properties. Specifically, it provides a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from idealized execution of EM in the infinite sample limit, and fully characterizing the limit points of the sequence in terms of the initial parameters; and then (ii) based on this convergence analysis, establishing statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.", "title": "" }, { "docid": "c175910d1809ad6dc073f79e4ca15c0c", "text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. 
The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.", "title": "" } ]
scidocsrr
c15c96d13564b43d356f34dba3b66f10
Neural Joking Machine : Humorous image captioning
[ { "docid": "81b242e3c98eaa20e3be0a9777aa3455", "text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.", "title": "" } ]
[ { "docid": "f94ef71233db13830d29ef9a0802f140", "text": "In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.", "title": "" }, { "docid": "268a86c25f1974630fada777790b162b", "text": "The paper presents a novel method and system for personalised (individualised) modelling of spatio/spectro-temporal data (SSTD) and prediction of events. A novel evolving spiking neural network reservoir system (eSNNr) is proposed for the purpose. The system consists of: spike-time encoding module of continuous value input information into spike trains; a recurrent 3D SNNr; eSNN as an evolving output classifier. Such system is generated for every new individual, using existing data of similar individuals. Subject to proper training and parameter optimisation, the system is capable of accurate spatiotemporal pattern recognition (STPR) and of early prediction of individual events. The method and the system are generic, applicable to various SSTD and classification and prediction problems. As a case study, the method is applied for early prediction of occurrence of stroke on an individual basis. Preliminary experiments demonstrated a significant improvement in accuracy and time of event prediction when using the proposed method when compared with standard machine learning methods, such as MLR, SVM, MLP. Future development and applications are discussed.", "title": "" }, { "docid": "1521052e24aca6db9d2a03df05089c88", "text": "In this paper we suggest advanced IEEE 802.11ax TCP-aware scheduling strategies for optimizing the AP operation under transmission of unidirectional TCP traffic. Our scheduling strategies optimize the performance using the capability for Multi User transmissions over the Uplink, first introduced in IEEE 802.11ax, together with Multi User transmissions over the Downlink. They are based on Transmission Opportunities (TXOP) and we suggest three scheduling strategies determining the TXOP formation parameters. In one of the strategies one can control the achieved Goodput vs. the delay. We also assume saturated WiFi transmission queues. We show that with minimal Goodput degradation one can avoid considerable delays.", "title": "" }, { "docid": "7867544be1b36ffab85b02c63cb03922", "text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. 
Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) filters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.", "title": "" }, { "docid": "7c4768707a3efd3791520576a8a78e23", "text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as early warning messages. The metrics used in this study are user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.", "title": "" }, { "docid": "2d37baab58e7dd5c442b9041d0995134", "text": "With the growing problem of childhood obesity, recent research has begun to focus on family and social influences on children's eating patterns. Research has demonstrated that children's eating patterns are strongly influenced by characteristics of both the physical and social environment. With regard to the physical environment, children are more likely to eat foods that are available and easily accessible, and they tend to eat greater quantities when larger portions are provided. Additionally, characteristics of the social environment, including various socioeconomic and sociocultural factors such as parents' education, time constraints, and ethnicity, influence the types of foods children eat. Mealtime structure is also an important factor related to children's eating patterns. Mealtime structure includes social and physical characteristics of mealtimes including whether families eat together, TV-viewing during meals, and the source of foods (e.g., restaurants, schools). Parents also play a direct role in children's eating patterns through their behaviors, attitudes, and feeding styles. Interventions aimed at improving children's nutrition need to address the variety of social and physical factors that influence children's eating patterns.", "title": "" }, { "docid": "3deced64cd17210f7e807e686c0221af", "text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality.
We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.", "title": "" }, { "docid": "38a10f18aa943c53892ee995173e773d", "text": "This project aims at studying how recent interactive and interactions technologies would help extend how we play the guitar, thus defining the “multimodal guitar”. Our contributions target three main axes: audio analysis, gestural control and audio synthesis. For this purpose, we designed and developed a freely-available toolbox for augmented guitar performances, compliant with the PureData and Max/MSP environments, gathering tools for: polyphonic pitch estimation, fretboard visualization and grouping, pressure sensing, modal synthesis, infinite sustain, rearranging looping and “smart” harmonizing.", "title": "" }, { "docid": "680fa29fcd41421a2b3b235555f0cb91", "text": "Brown adipose tissue (BAT) is the main site of adaptive thermogenesis and experimental studies have associated BAT activity with protection against obesity and metabolic diseases, such as type 2 diabetes mellitus and dyslipidaemia. Active BAT is present in adult humans and its activity is impaired in patients with obesity. The ability of BAT to protect against chronic metabolic disease has traditionally been attributed to its capacity to utilize glucose and lipids for thermogenesis. However, BAT might also have a secretory role, which could contribute to the systemic consequences of BAT activity. Several BAT-derived molecules that act in a paracrine or autocrine manner have been identified. Most of these factors promote hypertrophy and hyperplasia of BAT, vascularization, innervation and blood flow, processes that are all associated with BAT recruitment when thermogenic activity is enhanced. Additionally, BAT can release regulatory molecules that act on other tissues and organs. This secretory capacity of BAT is thought to be involved in the beneficial effects of BAT transplantation in rodents. Fibroblast growth factor 21, IL-6 and neuregulin 4 are among the first BAT-derived endocrine factors to be identified. In this Review, we discuss the current understanding of the regulatory molecules (the so-called brown adipokines or batokines) that are released by BAT that influence systemic metabolism and convey the beneficial metabolic effects of BAT activation. The identification of such adipokines might also direct drug discovery approaches for managing obesity and its associated chronic metabolic diseases.", "title": "" }, { "docid": "1ff4d4588826459f1d8d200d658b9907", "text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. 
Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.", "title": "" }, { "docid": "6f18fbbd62f753807ba77141f21d0cf6", "text": "[1] The Mw 6.6, 26 December 2003 Bam (Iran) earthquake was one of the first earthquakes for which Envisat advanced synthetic aperture radar (ASAR) data were available. Using interferograms and azimuth offsets from ascending and descending tracks, we construct a three-dimensional displacement field of the deformation due to the earthquake. Elastic dislocation modeling shows that the observed deformation pattern cannot be explained by slip on a single planar fault, which significantly underestimates eastward and upward motions SE of Bam. We find that the deformation pattern observed can be best explained by slip on two subparallel faults. Eighty-five percent of moment release occurred on a previously unknown strike-slip fault running into the center of Bam, with peak slip of over 2 m occurring at a depth of 5 km. The remainder occurred as a combination of strike-slip and thrusting motion on a southward extension of the previously mapped Bam Fault 5 km to the east.", "title": "" }, { "docid": "40479536efec6311cd735f2bd34605d7", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. 
To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper is devoted to reviewing state-of-the-art scalable GPs involving two main categories: global approximations which distill the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "95196bd9be49b426217b7d81fc51a04b", "text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today's logistics be it the private sector or relieving the lives of those blighted by disaster.", "title": "" }, { "docid": "24297f719741f6691e5121f33bafcc09", "text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells.
This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.", "title": "" }, { "docid": "71bc346237c5f97ac245dd7b7bbb497f", "text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.", "title": "" }, { "docid": "3b285e3bd36dfeabb80a2ab57470bdc5", "text": "This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.", "title": "" }, { "docid": "dcd919590e0b6b52ea3a6be7378d5d25", "text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.", "title": "" }, { "docid": "c1b6934a3d18915a466aa69b6fe78bd4", "text": "The mucous gel maintains a neutral microclimate at the epithelial cell surface, which may play a role in both the prevention of gastroduodenal injury and the provision of an environment essential for epithelial restitution and regeneration after injury. Enhancement of the components of the mucous barrier by sucralfate may explain its therapeutic efficacy for upper gastrointestinal tract protection, repai, and healing. We studied the effect of sucralfate and its major soluble component, sucrose octasulfate (SOS), on the synthesis and release of gastric mucin and surface active phospholipid, utilizing an isolated canine gastric mucous cells in culture. 
We correlated these results with the effect of the agents on mucin synthesis and secretion utilizing explants of canine fundus in vitro. Sucralfate and SOS significantly stimulated phospholipid secretion by isolated canine mucous cells in culture (123% and 112% of control, respectively). Indomethacin pretreatment significantly inhibited the effect of sucralfate, but not SOS, on the stimulation of phospholipid release. Administration of either sucralfate or SOS to the isolated canine mucous cells had no effect upon mucin synthesis or secretion using a sensitive immunoassay. Sucralfate and SOS did not stimulate mucin release in the canine explants; sucralfate significantly stimulated the synthesis of mucin, but only to 108% of that observed in untreated explants. No increase in PGE2 release was observed after sucralfate or SOS exposure to the isolated canine mucous cells. Our results suggest sucralfate affects the mucus barrier largely in a qualitative manner. No increase in mucin secretion or major effect on synthesis was noted, although a significant increase in surface active phospholipid release was observed. The lack of dose dependency of this effect, along with the results of the PGE2 assay, suggests the drug may act through a non-receptor-mediated mechanism to perturb the cell membrane and release surface active phospholipid. The enhancement of phospholipid release by sucralfate to augment the barrier function of gastric mucus may, in concert with other effects of the drug, strengthen mucosal barrier function.", "title": "" }, { "docid": "5096194bcbfebd136c74c30b998fb1f3", "text": "The present study is designed to propose a conceptual framework extended from the previously advanced Theory of Acceptance Model (TAM). The framework makes it possible to examine the effects of social media, and perceived risk as the moderating effects between intention and actual purchase, to be able to advance the Theory of Acceptance Model (TAM). 400 samples will be randomly selected among Saudis in Jeddah, Dammam and Riyadh. Data will be collected using a questionnaire survey. As the research involves the analysis of numerical data, the assessment is carried out using a Structural Equation Model (SEM). The hypothesis will be tested and the result is used to explain the proposed TAM. The findings from the present study will be beneficial for marketers to understand the intrinsic behavioral factors that influence consumers' selection and hence avoid trial and error in their advertising drives.", "title": "" }, { "docid": "92d5ebd49670681a5d43ba90731ae013", "text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types.
To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.", "title": "" } ]
scidocsrr
6facc49979ae27f41164bba62992f4c6
Emotional Human Machine Conversation Generation Based on SeqGAN
[ { "docid": "f7696fca636f8959a1d0fbeba9b2fb67", "text": "With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 
2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts the next token conditioned on its previously predicted ones that may never be observed in the training data. Such a discrepancy between training and inference can accumulate along the sequence and will become prominent as the length of the sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and has mostly been applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good it is now and the future score once the entire sequence has been generated. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al.
2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three realworld tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently training deep belief nets (DBN). (Bengio et al. 2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 
2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" }, { "docid": "33468c214408d645651871bd8018ed82", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" } ]
[ { "docid": "d38e5fa4adadc3e979c5de812599c78a", "text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.", "title": "" }, { "docid": "affbc18a3ba30c43959e37504b25dbdc", "text": "ion for Falsification Thomas Ball , Orna Kupferman , and Greta Yorsh 3 1 Microsoft Research, Redmond, WA, USA. Email: tball@microsoft.com, URL: research.microsoft.com/ ∼tball 2 Hebrew University, School of Eng. and Comp. Sci., Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il, URL: www.cs.huji.ac.il/ ∼orna 3 Tel-Aviv University, School of Comp. Sci., Tel-Aviv 69978, Israel. Email:gretay@post.tau.ac.il, URL: www.math.tau.ac.il/ ∼gretay Microsoft Research Technical Report MSR-TR-2005-50 Abstract. Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the conAbstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ thenall the concrete states that correspond to a satisfyψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is fal ification. There, as intesting, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a propertyψ, thenthere existsa concrete state that corresponds to a and violatesψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting. We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.", "title": "" }, { "docid": "fbecc8c4a8668d403df85b4e52348f6e", "text": "Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. 
This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.", "title": "" }, { "docid": "f00b9a311fb8b14100465c187c9e4659", "text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.", "title": "" }, { "docid": "fba0ff24acbe07e1204b5fe4c492ab72", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "f43ed3feda4e243a1cb77357b435fb52", "text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1", "title": "" }, { "docid": "d90a66cf63abdc1d0caed64812de7043", "text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. 
Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.", "title": "" }, { "docid": "955376cf6d04373c407987613d1c2bd1", "text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.", "title": "" }, { "docid": "56fa6f96657182ff527e42655bbd0863", "text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. 
Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.", "title": "" }, { "docid": "c26eabb377db5f1033ec6d354d890a6f", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "a712b6efb5c869619864cd817c2e27e1", "text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.", "title": "" }, { "docid": "1072728cf72fe02d3e1f3c45bfc877b5", "text": "The annihilating filter-based low-rank Hanel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. Inspired by the recent mathematical discovery that links deep neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional re-gridding layer. 
Extensive numerical experiments show that the proposed deep learning method significantly outperforms the existing image-domain deep learning approaches.", "title": "" }, { "docid": "dc4a08d2b98f1e099227c4f80d0b84df", "text": "We address action temporal localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in action temporal localization via multi-stage segment-based 3D ConvNets: (1) a proposal stage identifies candidate segments in a long video that may contain actions; (2) a classification stage learns a one-vs-all action classification model to serve as initialization for the localization stage; and (3) a localization stage fine-tunes on the model learnt in the classification stage to localize each action instance. We propose a novel loss function for the localization stage to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.", "title": "" }, { "docid": "c21e999407da672be5bac4eaba950168", "text": "Software engineers are frequently faced with tasks that can be expressed as optimization problems. To support them with automation, search-based model-driven engineering combines the abstraction power of models with the versatility of meta-heuristic search algorithms. While current approaches in this area use genetic algorithms with fixed mutation operators to explore the solution space, the efficiency of these operators may heavily depend on the problem at hand. In this work, we propose FitnessStudio, a technique for generating efficient problem-tailored mutation operators automatically based on a two-tier framework. The lower tier is a regular meta-heuristic search whose mutation operator is trained by an upper-tier search using a higher-order model transformation. We implemented this framework using the Henshin transformation language and evaluated it in a benchmark case, where the generated mutation operators enabled an improvement to the state of the art in terms of result quality, without sacrificing performance.", "title": "" }, { "docid": "5950aadef33caa371f0de304b2b4869d", "text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries.
The study also suggests future research paths to confirm, expand, and validate the new service innovation model.", "title": "" }, { "docid": "1b063dfecff31de929383b8ab74f7f6b", "text": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the “Adam-type”, includes the popular algorithms such as Adam (Kingma & Ba, 2014) , AMSGrad (Reddi et al., 2018) , AdaGrad (Duchi et al., 2011). Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/ √ T ) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.", "title": "" }, { "docid": "8c03df6650b3e400bc5447916d01820a", "text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.", "title": "" }, { "docid": "b8dfe30c07f0caf46b3fc59406dbf017", "text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. 
Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precision-at-10 and generates approximately 6.8 acceptable questions per 250 words of source text.", "title": "" }, { "docid": "139f750d4e53b86bc785785b7129e6ee", "text": "Enterprise Resource Planning (ERP) systems hold great promise for integrating business processes and have proven their worth in a variety of organizations. Yet the gains that they have enabled in terms of increased productivity and cost savings are often achieved in the face of daunting usability problems. While one frequently hears anecdotes about the difficulties involved in using ERP systems, there is little documentation of the types of problems typically faced by users. The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 company in the first years of its large-scale ERP implementation. This study also demonstrates the promise of using collaboration theory to evaluate usability characteristics of existing systems and to design new systems. Given the impressive results already achieved by some corporations with these systems, imagine how much more would be possible if understanding how to use them weren’t such an", "title": "" }, { "docid": "7b1b0e31384cb99caf0f3d8cf8134a53", "text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of the concomitant occurrence of TEN and severe granulocytopenia following treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days after the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in the case of chronic renal failure.", "title": "" } ]
scidocsrr
fbbf8a4fae9225bb651a3199beed5417
Computation offloading and resource allocation for low-power IoT edge devices
[ { "docid": "16fbebf500be1bf69027d3a35d85362b", "text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.", "title": "" }, { "docid": "2c4babb483ddd52c9f1333cbe71a3c78", "text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.", "title": "" }, { "docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" }, { "docid": "956799f28356850fda78a223a55169bf", "text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. 
We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for", "title": "" }, { "docid": "bd820eea00766190675cd3e8b89477f2", "text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.", "title": "" } ]
[ { "docid": "e602ab2a2d93a8912869ae8af0925299", "text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.", "title": "" }, { "docid": "34883c8cef40a0e587295b6ece1b796b", "text": "Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to be applied to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting technologies, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT EnglishGerman/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.", "title": "" }, { "docid": "fec8129b24f30d4dbb93df4dce7885e8", "text": "We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.", "title": "" }, { "docid": "13b9fd37b1cf4f15def39175157e12c5", "text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.", "title": "" }, { "docid": "f5fd1d6f15c9ef06c343378a6f7038a0", "text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. 
This paper formally defines these three levels and examines the conceptual objects that comprise them. The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.", "title": "" }, { "docid": "b22136f00469589c984081742c4605d3", "text": "Convolutional neural network (CNN), which comprises one or more convolutional and pooling layers followed by one or more fully-connected layers, has gained popularity due to its ability to learn fruitful representations from images or speeches, capturing local dependency and slight-distortion invariance. CNN has recently been applied to the problem of activity recognition, where 1D kernels are applied to capture local dependency over time in a series of observations measured at inertial sensors (3-axis accelerometers and gyroscopes). In this paper we present a multi-modal CNN where we use 2D kernels in both convolutional and pooling layers, to capture local dependency over time as well as spatial dependency over sensors. Experiments on benchmark datasets demonstrate the high performance of our multi-modal CNN, compared to several state of the art methods.", "title": "" }, { "docid": "7b5b9990bfef9d2baf28030123359923", "text": "a r t i c l e i n f o a b s t r a c t This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001). 
The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …", "title": "" }, { "docid": "4f537c9e63bbd967e52f22124afa4480", "text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.", "title": "" }, { "docid": "0846f7d40f5cbbd4c199dfb58c4a4e7d", "text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.", "title": "" }, { "docid": "d6136f26c7b387693a5f017e6e2e679a", "text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g, eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. 
We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.", "title": "" }, { "docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22", "text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.", "title": "" }, { "docid": "160ab7f4c7be89ae2d56a7094e19d1a3", "text": "These days, microarray gene expression data are playing an essential role in cancer classifications. However, due to the availability of small number of effective samples compared to the large number of genes in microarray data, many computational methods have failed to identify a small subset of important genes. Therefore, it is a challenging task to identify small number of disease-specific significant genes related for precise diagnosis of cancer sub classes. In this paper, particle swarm optimization (PSO) method along with adaptive K-nearest neighborhood (KNN) based gene selection technique are proposed to distinguish a small subset of useful genes that are sufficient for the desired classification purpose. A proper value of K would help to form the appropriate numbers of neighborhood to be explored and hence to classify the dataset accurately. Thus, a heuristic for selecting the optimal values of K efficiently, guided by the classification accuracy is also proposed. This proposed technique of finding minimum possible meaningful set of genes is applied on three benchmark microarray datasets, namely the small round blue cell tumor (SRBCT) data, the acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) data and the mixed-lineage leukemia (MLL) data. Results demonstrate the usefulness of the proposed method in terms of classification accuracy on blind test samples, number of informative genes and computing time. Further, the usefulness and universal characteristics of the identified genes are reconfirmed by using different classifiers, such as support vector machine (SVM). 
2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "ec7f5b4596ae6e2c24856d16e4fdc193", "text": "This prospective, randomized study evaluated continuous-flow cold therapy for postoperative pain in outpatient arthroscopic anterior cruciate ligament (ACL) reconstructions. In group 1, cold therapy was constant for 3 days then as needed in days 4 through 7. Group 2 had no cold therapy. Evaluations and diaries were kept at 1, 2, and 8 hours after surgery, and then daily. Pain was assessed using the VAS and Likert scales. There were 51 cold and 49 noncold patients included. Continuous passive movement (CPM) use averaged 54 hours for cold and 41 hours for noncold groups (P=.003). Prone hangs were done for 192 minutes in the cold group and 151 minutes in the noncold group. Motion at 1 week averaged 5/88 for the cold group and 5/79 the noncold group. The noncold group average visual analog scale (VAS) pain and Likert pain scores were always greater than the cold group. The noncold group average Vicodin use (Knoll, Mt. Olive, NJ) was always greater than the cold group use (P=.001). Continuous-flow cold therapy lowered VAS and Likert scores, reduced Vicodin use, increased prone hangs, CPM, and knee flexion. Continuous-flow cold therapy is safe and effective for outpatient ACL reconstruction reducing pain medication requirements.", "title": "" }, { "docid": "496e57bd6a6d06123ae886e0d6753783", "text": "With the enormous growth of digital content in internet, various types of online reviews such as product and movie reviews present a wealth of subjective information that can be very helpful for potential users. Sentiment analysis aims to use automated tools to detect subjective information from reviews. Up to now as there are few researches conducted on feature selection in sentiment analysis, there are very rare works for Persian sentiment analysis. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in Persian language. 
Three of the challenges of Persian text are using of a wide variety of declensional suffixes, different word spacing and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on stemming and feature selection and is employed Naive Bayes algorithm for classification. We evaluate the performance of the model on a collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.", "title": "" }, { "docid": "7bac448a5754c168c897125a4f080548", "text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.", "title": "" }, { "docid": "944d467bb6da4991127b76310fec585b", "text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. 
This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.", "title": "" }, { "docid": "3f220d8863302719d3cf69b7d99f8c4e", "text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of the same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents Stripes (STR), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. Experimental measurements over a set of state-of-the-art DNNs for image classification show that STR improves performance over a state-of-the-art accelerator from 1.35× to 5.33× and by 2.24× on average. STR’s area and power overhead are estimated at 5 percent and 12 percent respectively. STR is 2.00× more energy efficient than the baseline.", "title": "" }, { "docid": "c613a7c8bca5b0c198d2a1885ecb0efb", "text": "Botnets have traditionally been seen as a threat to personal computers; however, the recent shift to mobile platforms resulted in a wave of new botnets. Due to its popularity, Android mobile Operating System became the most targeted platform. In spite of rising numbers, there is a significant gap in understanding the nature of mobile botnets and their communication characteristics. In this paper, we address this gap and provide a deep analysis of Command and Control (C&C) and built-in URLs of Android botnets detected since the first appearance of the Android platform. By combining both static and dynamic analyses with visualization, we uncover the relationships between the majority of the analyzed botnet families and offer an insight into each malicious infrastructure. As a part of this study we compile and offer to the research community a dataset containing 1929 samples representing 14 Android botnet families.", "title": "" }, { "docid": "a091e8885bd30e58f6de7d14e8170199", "text": "This paper presents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor. 
The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.", "title": "" } ]
scidocsrr
98199af516cd71aed3d6f88f3d9e743f
Three-Port Series-Resonant DC–DC Converter to Interface Renewable Energy Sources With Bidirectional Load and Energy Storage Ports
[ { "docid": "8b70670fa152dbd5185e80136983ff12", "text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells", "title": "" }, { "docid": "3b8033d8d68e5e9889df190d93800f85", "text": "A three-port triple-half-bridge bidirectional dc-dc converter topology is proposed in this paper. The topology comprises a high-frequency three-winding transformer and three half-bridges, one of which is a boost half-bridge interfacing a power port with a wide operating voltage. The three half-bridges are coupled by the transformer, thereby providing galvanic isolation for all the power ports. The converter is controlled by phase shift, which achieves the primary power flow control, in combination with pulsewidth modulation (PWM). Because of the particular structure of the boost half-bridge, voltage variations at the port can be compensated for by operating the boost half-bridge, together with the other two half-bridges, at an appropriate duty cycle to keep a constant voltage across the half-bridge. The resulting waveforms applied to the transformer windings are asymmetrical due to the automatic volt-seconds balancing of the half-bridges. With the PWM control it is possible to reduce the rms loss and to extend the zero-voltage switching operating range to the entire phase shift region. A fuel cell and supercapacitor generation system is presented as an embodiment of the proposed multiport topology. The theoretical considerations are verified by simulation and with experimental results from a 1 kW prototype.", "title": "" }, { "docid": "149d9a316e4c5df0c9300d26da685bc6", "text": "Multiport dc-dc converters are particularly interesting for sustainable energy generation systems where diverse sources and storage elements are to be integrated. This paper presents a zero-voltage switching (ZVS) three-port bidirectional dc-dc converter. A simple and effective duty ratio control method is proposed to extend the ZVS operating range when input voltages vary widely. Soft-switching conditions over the full operating range are achievable by adjusting the duty ratio of the voltage applied to the transformer winding in response to the dc voltage variations at the port. 
Keeping the volt-second product (half-cycle voltage-time integral) equal for all the windings leads to ZVS conditions over the entire operating range. A detailed analysis is provided for both the two-port and the three-port converters. Furthermore, for the three-port converter a dual-PI-loop based control strategy is proposed to achieve constant output voltage, power flow management, and soft-switching. The three-port converter is implemented and tested for a fuel cell and supercapacitor system.", "title": "" } ]
[ { "docid": "2801a5a26d532fc33543744ea89743f1", "text": "Microalgae have received much interest as a biofuel feedstock in response to the uprising energy crisis, climate change and depletion of natural sources. Development of microalgal biofuels from microalgae does not satisfy the economic feasibility of overwhelming capital investments and operations. Hence, high-value co-products have been produced through the extraction of a fraction of algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in cosmetics, nutritional and pharmaceuticals industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high value products and biofuel. This review presents the current challenges in the extraction of high value products from microalgae and its integration in the biorefinery. The economic potential assessment of microalgae biorefinery was evaluated to highlight the feasibility of the process.", "title": "" }, { "docid": "39debcb0aa41eec73ff63a4e774f36fd", "text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.", "title": "" }, { "docid": "31122e142e02b7e3b99c52c8f257a92e", "text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided substantially improved impervious surface image. Accuracy assessment indicated that the rootmean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained. © 2006 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "af0cfa757d5e419f4e0d00da20e2db8a", "text": "Vertebrate CpG islands (CGIs) are short interspersed DNA sequences that deviate significantly from the average genomic pattern by being GC-rich, CpG-rich, and predominantly nonmethylated. Most, perhaps all, CGIs are sites of transcription initiation, including thousands that are remote from currently annotated promoters. Shared DNA sequence features adapt CGIs for promoter function by destabilizing nucleosomes and attracting proteins that create a transcriptionally permissive chromatin state. Silencing of CGI promoters is achieved through dense CpG methylation or polycomb recruitment, again using their distinctive DNA sequence composition. CGIs are therefore generically equipped to influence local chromatin structure and simplify regulation of gene activity.", "title": "" }, { "docid": "e5b7402470ad6198b4c1ddb9d9878ea9", "text": "Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.", "title": "" }, { "docid": "462813402246b53bb4af46ca3b407195", "text": "We present the performance of a patient with acquired dysgraphia, DS, who has intact oral spelling (100% correct) but severely impaired written spelling (7% correct). Her errors consisted entirely of well-formed letter substitutions. This striking dissociation is further characterized by consistent preservation of orthographic, as opposed to phonological, length in her written output. This pattern of performance indicates that DS has intact graphemic representations, and that her errors are due to a deficit in letter shape assignment. We further interpret the occurrence of a small percentage of lexical errors in her written responses and a significant effect of letter frequencies and transitional probabilities on the pattern of letter substitutions as the result of a repair mechanism that locally constrains DS' written output.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "2dfad4f4b0d69085341dfb64d6b37d54", "text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. 
Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such a method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.", "title": "" }, { "docid": "c2edf373d60d4165afec75d70117530d", "text": "In her book Introducing Arguments, Linda Pylkkänen distinguishes between the core and noncore arguments of verbs by means of a detailed discussion of applicative and causative constructions. The term applicative refers to structures that in more general linguistic terms are defined as ditransitive, i.e. when both a direct and an indirect object are associated with the verb, as exemplified in (1) (Pylkkänen, 2008: 13):", "title": "" }, { "docid": "72c164c281e98386a054a25677c21065", "text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector. Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform. 
It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program which encourages engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring employees’ progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.", "title": "" }, { "docid": "2f793fb05d0dbe43f20f2b73119aa402", "text": "Dark Web analysis is an important aspect in the field of counter terrorism (CT). In the present scenario terrorist attacks are the biggest problem for mankind and the whole world is under constant threat from these well-planned, sophisticated and coordinated terrorist operations. Terrorists anonymously set up various web sites embedded in the public Internet, exchanging ideology, spreading propaganda, and recruiting new members. Dark web is a hotspot where terrorists are communicating and spreading their messages. Now every country is focusing on CT. Dark web analysis can be an efficient proactive method for CT by detecting and avoiding terrorist threats/attacks. In this paper we have proposed a dark web analysis model that analyzes dark web forums for CT and connecting the dots to prevent the country from terrorist attacks.", "title": "" }, { "docid": "21555c1ab91642c691a711f7b5868cda", "text": "Do men die young and sick, or do women live long and healthy? By trying to explain the sexual dimorphism in life expectancy, both biological and environmental aspects are presently being addressed. Besides age-related changes, both the immune and the endocrine system exhibit significant sex-specific differences. This review deals with the aging immune system and its interplay with sex steroid hormones. Together, they impact on the etiopathology of many infectious diseases, which are still the major causes of morbidity and mortality in people at old age. Among men, susceptibilities toward many infectious diseases and the corresponding mortality rates are higher. Responses to various types of vaccination are often higher among women thereby also mounting stronger humoral responses. Women appear immune-privileged. The major sex steroid hormones exhibit opposing effects on cells of both the adaptive and the innate immune system: estradiol being mainly enhancing, testosterone by and large suppressive. However, levels of sex hormones change with age. At menopause transition, dropping estradiol potentially enhances immunosenescence effects posing postmenopausal women at additional, yet specific risks. Conclusively during aging, interventions, which distinctively consider the changing level of individual hormones, shall provide potent options in maintaining optimal immune functions.", "title": "" }, { "docid": "d2c8a3fd1049713d478fe27bd8f8598b", "text": "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypotheses are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. 
Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally, we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.", "title": "" }, { "docid": "ff27912cfef17e66266bfcd013a874ee", "text": "The purpose of this note is to describe a useful lesson we learned on authentication protocol design. In a recent article [9], we presented a simple authentication protocol to illustrate the concept of a trusted server. The protocol has a flaw, which was brought to our attention by Martín Abadi of DEC. In what follows, we first describe the protocol and its flaw, and how the flaw was introduced in the process of deriving the protocol from its correct full information version. We then introduce a principle, called the Principle of Full Information, and explain how its use could have prevented the protocol flaw. We believe the Principle of Full Information is a useful authentication protocol design principle, and advocate its use. Lastly, we present several heuristics for simplifying full information protocols and illustrate their application to a mutual authentication protocol.", "title": "" }, { "docid": "54380a4e0ab433be24d100db52e6bb55", "text": "Why do some new technologies emerge and quickly supplant incumbent technologies while others take years or decades to take off? We explore this question by presenting a framework that considers both the focal competing technologies as well as the ecosystems in which they are embedded. Within our framework, each episode of technology transition is characterized by the ecosystem emergence challenge that confronts the new technology and the ecosystem extension opportunity that is available to the old technology. We identify four qualitatively distinct regimes with clear predictions for the pace of substitution. Evidence from 10 episodes of technology transitions in the semiconductor lithography equipment industry from 1972 to 2009 offers strong support for our framework. We discuss the implications of our approach for firm strategy.", "title": "" }, { "docid": "709021b1b7b7ddd073cac22abf26cf36", "text": "A video from a moving camera produces different numbers of observations of different scene areas. We can construct an attention map of the scene by bringing the frames to a common reference and counting the number of frames that observed each scene point. Different representations can be constructed from this. The base of the attention map gives the scene mosaic. Super-resolved images of parts of the scene can be obtained using a subset of observations or video frames. 
We can combine mosaicing with super-resolution by using all observations, but the magnification factor will vary across the scene based on the attention received. The height of the attention map indicates the amount of super-resolution for that scene point. We modify the traditional super-resolution framework to generate a varying resolution image for panning cameras in this paper. The varying resolution image uses all useful data available in a video. We introduce the concept of attention-based super-resolution and give the modified framework for it. We also show its applicability on a few indoor and outdoor videos.", "title": "" }, { "docid": "060101cf53a576336e27512431c4c4fc", "text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.", "title": "" }, { "docid": "9c452434ad1c25d0fbe71138b6c39c4b", "text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.", "title": "" }, { "docid": "3e80dc7319f1241e96db42033c16f6b4", "text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. 
Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.", "title": "" } ]
scidocsrr
2b6f51a00468b699236dbf09b625d81a
MLC Toolbox: A MATLAB/OCTAVE Library for Multi-Label Classification
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "d6d55f2f3c29605835305d3cc72a34ad", "text": "Most classification problems associate a single class to each example or instance. However, there are many classification tasks where each instance can be associated with one or more classes. This group of problems represents an area known as multi-label classification. One typical example of multi-label classification problems is the classification of documents, where each document can be assigned to more than one class. This tutorial presents the most frequently used techniques to deal with these problems in a pedagogical manner, with examples illustrating the main techniques and proposing a taxonomy of multi-label techniques that highlights the similarities and differences between these techniques.", "title": "" }, { "docid": "fbf57d773bcdd8096e77246b3f785a96", "text": "The explosion of online content has made the management of such content non-trivial. Web-related tasks such as web page categorization, news filtering, query categorization, tag recommendation, etc. often involve the construction of multi-label categorization systems on a large scale. Existing multi-label classification methods either do not scale or have unsatisfactory performance. In this work, we propose MetaLabeler to automatically determine the relevant set of labels for each instance without intensive human involvement or expensive cross-validation. Extensive experiments conducted on benchmark data show that the MetaLabeler tends to outperform existing methods. Moreover, MetaLabeler scales to millions of multi-labeled instances and can be deployed easily. This enables us to apply the MetaLabeler to a large scale query categorization problem in Yahoo!, yielding a significant improvement in performance.", "title": "" } ]
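The passages above treat multi-label classification, where each instance may carry several labels at once. The sketch below illustrates the simplest decomposition of that problem, binary relevance, which trains one independent binary classifier per label. It is a generic illustration in Python with scikit-learn, not code from WEKA, the MLC Toolbox, or MetaLabeler; the toy data, label count, and function names are hypothetical.

```python
# Minimal binary-relevance sketch for multi-label classification: one
# independent binary classifier per label column. Illustration only; the
# data are random toy values, so the learned models carry no real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # 200 instances, 10 features (toy data)
Y = (rng.random(size=(200, 3)) < 0.3).astype(int)   # 3 labels, each present/absent per instance

# Train one binary classifier per label (binary relevance).
models = [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict_labels(x_new):
    """Return the set of label indices predicted for a single instance."""
    x_new = np.asarray(x_new).reshape(1, -1)
    return [j for j, m in enumerate(models) if m.predict(x_new)[0] == 1]

print(predict_labels(X[0]))
```

MetaLabeler, as described above, goes further by also learning how many labels to keep for each instance; the binary-relevance sketch sidesteps that by thresholding each label independently.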
[ { "docid": "85cabd8a0c19f5db993edd34ded95d06", "text": "We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a “realistic” relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.", "title": "" }, { "docid": "470ecc2bc4299d913125d307c20dd48d", "text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.", "title": "" }, { "docid": "76e75c4549cbaf89796355b299bedfdc", "text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. 
Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.", "title": "" }, { "docid": "7b0d52753e359a6dff3847ff57c321ac", "text": "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenge task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MTLSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.", "title": "" }, { "docid": "01b05ea8fcca216e64905da7b5508dea", "text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.", "title": "" }, { "docid": "e1b050e8dc79f363c4a2b956f384c8d5", "text": "Keyphrase extraction is a fundamental technique in natural language processing. It enables documents to be mapped to a concise set of phrases that can be used for indexing, clustering, ontology building, auto-tagging and other information organization schemes. Two major families of unsupervised keyphrase extraction algorithms may be characterized as statistical and graph-based. We present a hybrid statistical-graphical algorithm that capitalizes on the heuristics of both families of algorithms and is able to outperform the state of the art in unsupervised keyphrase extraction on several datasets.", "title": "" }, { "docid": "90f188c1f021c16ad7c8515f1244c08a", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. 
The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "0fff38933ebaa8ecc2d891b0e742c567", "text": "The rates of different ATP-consuming reactions were measured in concanavalin A-stimulated thymocytes, a model system in which more than 80% of the ATP consumption can be accounted for. There was a clear hierarchy of the responses of different energy-consuming reactions to changes in energy supply: pathways of macromolecule biosynthesis (protein synthesis and RNA/DNA synthesis) were most sensitive to energy supply, followed by sodium cycling and then calcium cycling across the plasma membrane. Mitochondrial proton leak was the least sensitive to energy supply. Control analysis was used to quantify the relative control over ATP production exerted by the individual groups of ATP-consuming reactions. Control was widely shared; no block of reactions had more than one-third of the control. A fuller control analysis showed that there appeared to be a hierarchy of control over the flux through ATP: protein synthesis > RNA/DNA synthesis and substrate oxidation > Na+ cycling and Ca2+ cycling > other ATP consumers and mitochondrial proton leak. Control analysis also indicated that there was significant control over the rates of individual ATP consumers by energy supply. Each ATP consumer had strong control over its own rate but very little control over the rates of the other ATP consumers.", "title": "" }, { "docid": "ac76a4fe36e95d87f844c6876735b82f", "text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.", "title": "" }, { "docid": "de40fc5103b26520e0a8c981019982b5", "text": "Return-Oriented Programming (ROP) is the cornerstone of today’s exploits. Yet, building ROP chains is predominantly a manual task, enjoying limited tool support. Many of the available tools contain bugs, are not tailored to the needs of exploit development in the real world and do not offer practical support to analysts, which is why they are seldom used for any tasks beyond gadget discovery. We present PSHAPE (P ractical Support for Half-Automated P rogram Exploitation), a tool which assists analysts in exploit development. 
It discovers gadgets, chains gadgets together, and ensures that side effects such as register dereferences do not crash the program. Furthermore, we introduce the notion of gadget summaries, a compact representation of the effects a gadget or a chain of gadgets has on memory and registers. These semantic summaries enable analysts to quickly determine the usefulness of long, complex gadgets that use a lot of aliasing or involve memory accesses. Case studies on nine real binaries representing 147 MiB of code show PSHAPE’s usefulness: it automatically builds usable ROP chains for nine out of eleven scenarios.", "title": "" }, { "docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86", "text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.", "title": "" }, { "docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2", "text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. 
Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.", "title": "" }, { "docid": "0db1e1304ec2b5d40790677c9ce07394", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "6c4c235c779d9e6a78ea36d7fc636df4", "text": "Digital archiving creates a vast store of knowledge that can be accessed only through digital tools. Users of this information will need fluency in the tools of digital access, exploration, visualization, analysis, and collaboration. This paper proposes that this fluency represents a new form of literacy, which must become fundamental for humanities scholars. Tools influence both the creation and the analysis of information. Whether using pen and paper, Microsoft Office, or Web 2.0, scholars base their process, production, and questions on the capabilities their tools offer them. Digital archiving and the interconnectivity of the Web provide new challenges in terms of quantity and quality of information. They create a new medium for presentation as well as a foundation for collaboration that is independent of physical location. Challenges for digital humanities include: • developing new genres for complex information presentation that can be shared, analyzed, and compared; • creating a literacy in information analysis and visualization that has the same rigor and richness as current scholarship; and • expanding classically text-based pedagogy to include simulation, animation, and spatial and geographic representation.", "title": "" }, { "docid": "4d6d315aed4535c15714f78c183ac196", "text": "Is narcissism related to observer-rated attractiveness? Two views imply that narcissism is unrelated to attractiveness: positive illusions theory and Feingold’s (1992) attractiveness theory (i.e., attractiveness is unrelated to personality in general). In contrast, two other views imply that narcissism is positively related to attractiveness: an evolutionary perspective on narcissism (i.e., selection pressures in shortterm mating contexts shaped the evolution of narcissism, including greater selection for attractiveness in short-term versus long-term mating contexts) and, secondly, the self-regulatory processing model of narcissism (narcissists groom themselves to bolster grandiose self-images). A meta-analysis (N > 1000) reveals a small but reliable positive narcissism–attractiveness correlation that approaches the largest known personality–attractiveness correlations. 
The finding supports the evolutionary and self-regulatory views of narcissism. 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "99476690b32f04c8a1ec04dcd779f8f7", "text": "This paper discusses the conception and development of a ball-on-plate balancing system based on mechatronic design principles. Realization of the design is achieved with the simultaneous consideration towards constraints like cost, performance, functionality, extendibility, and educational merit. A complete dynamic system investigation for the ball-on-plate system is presented in this paper. This includes hardware design, sensor and actuator selection, system modeling, parameter identification, controller design and experimental testing. The system was designed and built by students as part of the course Mechatronics System Design at Rensselaer. 1. MECHATRONICS AT RENSSELAER Mechatronics is the synergistic combination of mechanical engineering, electronics, control systems and computers. The key element in mechatronics is the integration of these areas through the design process. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two sets of skills: modeling / analysis skills and experimentation / hardware implementation skills. Synergism and integration in design set a mechatronic system apart from a traditional, multidisciplinary system. Mechanical engineers are expected to design with synergy and integration and professors must now teach design accordingly. In the Department of Mechanical Engineering, Aeronautical Engineering & Mechanics (ME, AE & M) at Rensselaer there are presently two seniorelective courses in the field of mechatronics, which are also open to graduate students: Mechatronics, offered in the fall semester, and Mechatronic System Design, offered in the spring semester. In both courses, emphasis is placed on a balance between physical understanding and mathematical formalities. The key areas of study covered in both courses are: 1. Mechatronic system design principles 2. Modeling, analysis, and control of dynamic physical systems 3. Selection and interfacing of sensors, actuators, and microcontrollers 4. Analog and digital control electronics 5. Real-time programming for control Mechatronics covers the fundamentals in these areas through integrated lectures and laboratory exercises, while Mechatronic System Design focuses on the application and extension of the fundamentals through a design, build, and test experience. Throughout the coverage, the focus is kept on the role of the key mechatronic areas of study in the overall design process and how these key areas are integrated into a successful mechatronic system design. In mechatronics, balance is paramount. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two skill sets: 1. Modeling (physical and mathematical), analysis (closed-form and numerical simulation), and control design (analog and digital) of dynamic physical systems; and 2. Experimental validation of models and analysis (for computer simulation without experimental verification is at best questionable, and at worst useless), and an understanding of the key issues in hardware implementation of designs. Figure 1 shows a diagram of the procedure for a dynamic system investigation which emphasizes this balance. This diagram serves as a guide for the study of the various mechatronic hardware systems in the courses taught at Rensselaer. 
When students perform a complete dynamic system investigation of a mechatronic system, they develop modeling / analysis skills and obtain knowledge of and experience with a wide variety of analog and digital sensors and actuators that will be indispensable as mechatronic design engineers in future years. This fundamental process of dynamic system investigation shall be followed in this paper. 2. INTRODUCTION: BALL ON PLATE SYSTEM The ball-on-plate balancing system, due to its inherent complexity, presents a challenging design problem. In the context of such an unconventional problem, the relevance of mechatronics design methodology becomes apparent. This paper describes the design and development of a ball-on-plate balancing system that was built from an initial design concept by a team of primarily undergraduate students as part of the course Mechatronics System Design at Rensselaer. Other ball-on-plate balancing systems have been designed in the past and some are also commercially available (TecQuipment). The existing systems are, to some extent, bulky and non-portable, and prohibitively expensive for educational purposes. The objective of this design exercise, as is typical of mechatronics design, was to make the ball-on-plate balancing system ‘better, cheaper, quicker’, i.e., to build a compact and affordable ball-on-plate system within a single semester. These objectives were met extremely well by the design that will be presented in this paper. The system described here is unique for its innovativeness in terms of the sensing and actuation schemes, which are the two most critical issues in this design. The first major challenge was to sense the ball position, accurately, reliably, and in a noncumbersome, yet inexpensive way. The various options that were considered are listed below. The relative merits and demerits are also indicated. 1. Some sort of touch sensing scheme: not enough information available, maybe hard to implement. 2. Overhead digital camera with image grabbing and processing software: expensive, requires the use of additional software, requires the use of a super-structure to mount the camera. 3. Resistive grid on the plate (a two dimensional potentiometer): limited resolution, excessive and cumbersome wiring needed. 4. Grid of infrared sensors: inexpensive, limited resolution, cumbersome, excessive wiring needed. Physical System Physical Model Mathematical Model Model Parameter Identification Actual Dynamic Behavior Compare Predicted Dynamic Behavior Make Design Decisions Design Complete Measurements, Calculations, Manufacturer's Specifications Assumptions and Engineering Judgement Physical Laws Experimental Analysis Equation Solution: Analytical and Numerical Solution Model Adequate, Performance Adequate Model Adequate, Performance Inadequate Modify or Augment Model Inadequate: Modify Which Parameters to Identify? What Tests to Perform? Figure 1.Dynamic System Investigation chart 5. 3D-motion tracking of the ball by means of an infrared-ultrasonic transponder attached to the ball, which exchanges signals with 3 remotely located towers (V-scope by Lipman Electronic Engineering Ltd.): very accurate and clean measurements, requires an additional apparatus altogether, very expensive, special attachment to the ball has to be made Based on the above listed merits and demerits associated with each choice, it was decided to pursue the option of using a touch-screen. It offered the most compact, reliable, and affordable solution. 
This decision was followed by extensive research pertaining to the selection and implementation of an appropriate touch-sensor. The next major challenge was to design an actuation mechanism for the plate. The plate has to rotate about its two planer body axes, to be able to balance the ball. For this design, the following options were considered: 1. Two linear actuators connected to two corners on the base of the plate that is supported by a ball and socket joint in the center, thus providing the two necessary degrees of motion: very expensive 2. Mount the plate on a gimbal ring. One motor turns the gimbal providing one degree of rotation; the other motor turns the plate relative to the ring thus providing a second degree of rotation: a non-symmetric set-up because one motor has to move the entire gimbal along with the plate thus experiencing a much higher load inertia as compared to the other motor. 3. Use of cable and pulley arrangement to turn the plate using two motors (DC or Stepper): good idea, has been used earlier 4. Use a spatial linkage mechanism to turn the plate using two motors (DC or Stepper): This comprises two four-bar parallelogram linkages, each driving one axis of rotation of the plate: an innovative method never tried before, design has to verified. Figure 2 Ball-on-plate System Assembly In this case, the final choice was selected for its uniqueness as a design never tried before. Figure 2 shows an assembly view of the entire system including the spatial linkage mechanism and the touch-screen mounted on the plate. 3. PHYSICAL SYSTEM DESCRIPTION The physical system consists of an acrylic plate, an actuation mechanism for tilting the plate about two axes, a ball position sensor, instrumentation for signal processing, and real-time control software/hardware. The entire system is mounted on an aluminium base plate and is supported by four vertical aluminium beams. The beams provide shape and support to the system and also provide mountings for the two motors. 3.1 Actuation mechanism Figure 3. The spatial linkage mechanism used for actuating the plate. Each motor (O 1 and O2) drives one axis of the plate-rotation angle and is connected to the plate by a spatial linkage mechanism (Figure 3). Referring to the schematic in Figure 5, each side of the spatial linkage mechanism (O 1-P1-A-O and O2-P2-B-O) is a four-bar parallelogram linkage. This ensures that for small motions around the equilibrium, the plate angles (q1 and q2, defined later) are equal to the corresponding motor angles (θm1 and θm2). The plate is connected to ground by means of a U-joint at O. Ball joints (at points P1, P2, A and B) connecting linkages and rods provide enough freedom of motion to ensure that the system does not bind. The motor angles are measured by highresolution optical encoders mounted on the motor shafts. A dual-axis inclinometer is mounted on the plate to measure the plate angles directly. As shall be shown later, for small motions, the motor angles correspond to the plate angles due to the kinematic constraints imposed by the parallelogram linkages. The motors used for driving the l", "title": "" }, { "docid": "2f83ca2bdd8401334877ae4406a4491c", "text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. 
HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.", "title": "" }, { "docid": "0793d82c1246c777dce673d8f3146534", "text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.", "title": "" } ]
scidocsrr
32f88afee3e76030d362e32d0a300e56
A Distributed Sensor Data Search Platform for Internet of Things Environments
[ { "docid": "1a9e2481abf23501274e67575b1c9be6", "text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the ideal”, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utility” for the “majority” and a minimum of an individual regret for the “opponent”. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences.", "title": "" } ]
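The passage above describes TOPSIS as vector-normalizing the decision matrix and ranking alternatives by their distances to the ideal and negative-ideal solutions. The following minimal sketch traces that procedure in Python with NumPy; the decision matrix, the weights, and the assumption that every criterion is a benefit (larger-is-better) criterion are made up for illustration and are not taken from the cited comparative study.

```python
# Minimal TOPSIS sketch: vector normalization, weighted distances to the
# ideal and negative-ideal solutions, and ranking by relative closeness.
# The matrix and weights are hypothetical; all criteria assumed benefit-type.
import numpy as np

D = np.array([[250., 16., 12.],        # alternatives x criteria (example values)
              [200., 16.,  8.],
              [300., 32., 16.],
              [275., 32.,  8.]])
w = np.array([0.3, 0.4, 0.3])          # criterion weights, summing to 1

R = D / np.linalg.norm(D, axis=0)      # vector normalization per criterion
V = R * w                              # weighted normalized decision matrix

ideal = V.max(axis=0)                  # ideal solution (benefit criteria)
anti_ideal = V.min(axis=0)             # negative-ideal solution

d_plus = np.linalg.norm(V - ideal, axis=1)        # distance to ideal
d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to negative-ideal

closeness = d_minus / (d_plus + d_minus)          # relative closeness in [0, 1]
ranking = np.argsort(-closeness)                  # best alternative first
print(closeness, ranking)
```

Cost (smaller-is-better) criteria would flip the max/min used to form the ideal and negative-ideal vectors; the passage's points that VIKOR uses linear rather than vector normalization and that TOPSIS does not weigh the two distances against each other are not captured in this sketch.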
[ { "docid": "46a4e4dbcb9b6656414420a908b51cc5", "text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.", "title": "" }, { "docid": "0e4334595aeec579e8eb35b0e805282d", "text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.", "title": "" }, { "docid": "df54716e3bed98a8bf510587cfcdb6cb", "text": "We propose a method to procedurally generate a familiar yet complex human artifact: the city. We are not trying to reproduce existing cities, but to generate artificial cities that are convincing and plausible by capturing developmental behavior. In addition, our results are meant to build upon themselves, such that they ought to look compelling at any point along the transition from village to metropolis. Our approach largely focuses upon land usage and building distribution for creating realistic city environments, whereas previous attempts at city modeling have mainly focused on populating road networks. Finally, we want our model to be self automated to the point that the only necessary input is a terrain description, but other high-level and low-level parameters can be specified to support artistic contributions. With the aid of agent based simulation we are generating a system of agents and behaviors that interact with one another through their effects upon a simulated environment. Our philosophy is that as each agent follows a simple behavioral rule set, a more complex behavior will tend to emerge out of the interactions between the agents and their differing rule sets. 
By confining our model to a set of simple rules for each class of agents, we hope to make our model extendible not only in regard to the types of structures that are produced, but also in describing the social and cultural influences prevalent in all cities.", "title": "" }, { "docid": "6cf7fb67afbbc7d396649bb3f05dd0ca", "text": "This paper details a methodology for using structured light laser imaging to create high resolution bathymetric maps of the sea floor. The system includes a pair of stereo cameras and an inclined 532nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision set up is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site and does not require precise jigs or a controlled environment. A batch procedure to extract the laser line from the images to sub-pixel accuracy is also presented. The method is robust to variations in image quality and moderate amounts of water column turbidity. The final maps are constructed using a reformulation of a previous bathymetric Simultaneous Localization and Mapping (SLAM) algorithm called incremental Smoothing and Mapping (iSAM). The iSAM framework is adapted from previous applications to perform sub-mapping, where segments of previously visited terrain are registered to create relative pose constraints. The resulting maps can be gridded at one centimeter and have significantly higher sample density than similar surveys using high frequency multibeam sonar or stereo vision. Results are presented for sample surveys at a submerged archaeological site and sea floor rock outcrop.", "title": "" }, { "docid": "4b9ccf92713405e7c45e8a21bb09e150", "text": "The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.", "title": "" }, { "docid": "4159f4f92adea44577319e897f10d765", "text": "While our knowledge about ancient civilizations comes mostly from studies in archaeology and history books, much can also be learned or confirmed from literary texts . Using natural language processing techniques, we present aspects of ancient China as revealed by statistical textual analysis on the Complete Tang Poems , a 2.6-million-character corpus of all surviving poems from the Tang Dynasty (AD 618 —907). 
Using an automatically created treebank of this corpus, we outline the semantic profiles of various poets, and discuss the role of seasons, geography, history, architecture, and colours, as observed through word selection and dependencies.", "title": "" }, { "docid": "554b82dc9820bae817bac59e81bf798a", "text": "This paper proposes a 4-channel parallel 40 Gb/s front-end amplifier (FEA) in an optical receiver for a parallel optical transmission system. A novel enhancement type regulated cascade (ETRGC) configuration with an active inductor is introduced in this paper for the transimpedance amplifier to significantly increase the bandwidth. The technique of three-order interleaving active feedback expands the bandwidth of the gain stage of the transimpedance amplifier and limiting amplifier. Experimental results show that the output swing is 210 mV (Vpp) when the input voltage varies from 5 mV to 500 mV. The power consumption of the 4-channel parallel 40 Gb/s front-end amplifier (FEA) is 370 mW with a 1.8 V power supply and the chip area is 650 μm×1300 μm.", "title": "" }, { "docid": "734eb2576affeb2e34f07b5222933f12", "text": "In this paper, a novel chemical sensor system utilizing an Ion-Sensitive Field Effect Transistor (ISFET) for pH measurement is presented. Compared to other interface circuits, this system uses auto-zero amplifiers with a ping-pong control scheme and an array of Programmable-Gate Ion-Sensitive Field Effect Transistors (PG-ISFETs). By feedback controlling the programmable gates of the ISFETs, the intrinsic sensor offset can be compensated for uniformly. Furthermore, the chemical signal sensitivity can be enhanced due to the feedback system on the sensing node. A ping-pong structure and operation protocol has been developed to realize the circuit, reducing the error and achieving continuous measurement. This system has been designed and fabricated in AMS 0.35µm, to compensate for a threshold voltage variation of ±5V and enhance the pH sensitivity to 100mV/pH.", "title": "" }, { "docid": "d48529ec9487fab939bc8120c44499d0", "text": "A new wideband circularly polarized antenna using a metasurface superstrate for C-band satellite communication applications is proposed in this letter. The proposed antenna consists of a planar slot coupling antenna with an array of metallic rectangular patches that can be viewed as a polarization-dependent metasurface superstrate. The metasurface is utilized to adjust the axial ratio (AR) for wideband circular polarization. Furthermore, the proposed antenna has a compact structure with a low profile of 0.07λ0 (λ0 stands for the free-space wavelength at 5.25 GHz) and a ground size of 34.5×28 mm2. Measured results show that the -10-dB impedance bandwidth for the proposed antenna is 33.7% from 4.2 to 5.9 GHz, and the 3-dB AR bandwidth is 16.5% from 4.9 to 5.9 GHz with an average gain of 5.8 dBi. The simulated and measured results are in good agreement to verify the good performance of the proposed antenna.", "title": "" }, { "docid": "9f5e4d52df5f13a80ccdb917a899bb9e", "text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model, represented by a nonparametric model, from depth scenes and then estimates the ego-motion of the sensor using an energy-based dense-visual-odometry approach based on the estimated background model in order to account for moving objects.
Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.", "title": "" }, { "docid": "9d316fae0354f3eb28540ea013b4f8a4", "text": "Natural language makes considerable use of recurrent formulaic patterns of words. This article triangulates the construct of formula from corpus linguistic, psycholinguistic, and educational perspectives. It describes the corpus linguistic extraction of pedagogically useful formulaic sequences for academic speech and writing. It determines English as a second language (ESL) and English for academic purposes (EAP) instructors’ evaluations of their pedagogical importance. It summarizes three experiments which show that different aspects of formulaicity affect the accuracy and fluency of processing of these formulas in native speakers and in advanced L2 learners of English. The language processing tasks were selected to sample an ecologically valid range of language processing skills: spoken and written, production and comprehension. Processing in all experiments was affected by various corpus-derived metrics: length, frequency, and mutual information (MI), but to different degrees in the different populations. For native speakers, it is predominantly the MI of the formula which determines processability; for nonnative learners of the language, it is predominantly the frequency of the formula. The implications of these findings are discussed for (a) the psycholinguistic validity of corpus-derived formulas, (b) a model of their acquisition, (c) ESL and EAP instruction and the prioritization of which formulas to teach.", "title": "" }, { "docid": "769ba1ac260f54ea64b83d34b97fc868", "text": "Truck platooning for which multiple trucks follow at a short distance is considered a near-term truck automation opportunity, with the potential to reduce fuel consumption. Short following distances and increasing automation make it hard for a driver to be the backup if the system fails. The EcoTwin consortium successfully demonstrated a two truck platooning system with trucks following at 20 meters distance at the public road, in which the driver is the backup. The ambition of the consortium is to increase the truck automation and to reduce the following distance, which requires a new fail-operational truck platooning architecture. This paper presents a level 2+ platooning system architecture, which is fail-operational for a single failure, and the corresponding process to obtain it. First insights in the existing two truck platooning system are obtained by analyzing its key aspects, being utilization, latency, reliability, and safety. Using these insights, candidate level 2+ platooning system architectures are defined from which the most suitable truck platooning architecture is selected. Future work is the design and implementation of a prototype, based on the presented level 2+ platooning system architecture.", "title": "" }, { "docid": "b5a8577b02f7f44e9fc5abd706e096d4", "text": "Automotive Safety Integrity Level (ASIL) decomposition is a technique presented in the ISO 26262: Road Vehicles Functional Safety standard. Its purpose is to satisfy safety-critical requirements by decomposing them into less critical ones. This procedure requires a system-level validation, and the elements of the architecture to which the decomposed requirements are allocated must be analyzed in terms of Common-Cause Faults (CCF). 
In this work, we present a generic method for a bottomup ASIL decomposition, which can be used during the development of a new product. The system architecture is described in a three-layer model, from which fault trees are generated, formed by the application, resource, and physical layers and their mappings. A CCF analysis is performed on the fault trees to verify the absence of possible common faults between the redundant elements and to validate the ASIL decomposition.", "title": "" }, { "docid": "d9e4a4303a7949b51510cf95098e4248", "text": "Recent increased regulatory scrutiny concerning subvisible particulates (SbVPs) in parenteral formulations of biologics has led to the publication of numerous articles about the sources, characteristics, implications, and approaches to monitoring and detecting SbVPs. Despite varying opinions on the level of associated risks and method of regulation, nearly all industry scientists and regulators agree on the need for monitoring and reporting visible and subvisible particles. As prefillable drug delivery systems have become a prominent packaging option, silicone oil, a common primary packaging lubricant, may play a role in the appearance of particles. The goal of this article is to complement the current SbVP knowledge base with new insights into the evolution of silicone-oil-related particulates and their interactions with components in prefillable systems. We propose a \"toolbox\" for improved silicone-oil-related particulate detection and enumeration, and discuss the benefits and limitations of approaches for lowering and controlling silicone oil release in parenterals. Finally, we present surface cross-linking of silicone as the recommended solution for achieving significant SbVP reduction without negatively affecting functional performance.", "title": "" }, { "docid": "becd45d50ead03dd5af399d5618f1ea3", "text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.", "title": "" }, { "docid": "1b394e01c8e2ea7957c62e3e0b15fbd7", "text": "In this paper, we present results on the implementation of a hierarchical quaternion based attitude and trajectory controller for manual and autonomous flights of quadrotors. Unlike previous papers on using quaternion representation, we use the nonlinear complementary filter that estimates the attitude in quaternions and as such does not involve Euler angles or rotation matrices. 
We show that for precise trajectory tracking, the resulting attitude error dynamics of the system is non-autonomous and is almost globally asymptotically and locally exponentially stable under the proposed control law. We also show local exponential stability of the translational dynamics under the proposed trajectory tracking controller which sits at the highest level of the hierarchy. Thus by input-to-state stability, the entire system is locally exponentially stable. The quaternion based observer and controllers are available as open-source.", "title": "" }, { "docid": "4437a0241b825fddd280517b9ae3565a", "text": "The levels of pregnenolone, dehydroepiandrosterone (DHA), androstenedione, testosterone, dihydrotestosterone (DHT), oestrone, oestradiol, cortisol and luteinizing hormone (LH) were measured in the peripheral plasma of a group of young, apparently healthy males before and after masturbation. The same steroids were also determined in a control study, in which the psychological antipation of masturbation was encouraged, but the physical act was not carried out. The plasma levels of all steroids were significantly increased after masturbation, whereas steroid levels remained unchanged in the control study. The most marked changes after masturbation were observed in pregnenolone and DHA levels. No alterations were observed in the plasma levels of LH. Both before and after masturbation plasma levels of testosterone were significantly correlated to those of DHT and oestradiol, but not to those of the other steroids studied. On the other hand, cortisol levels were significantly correlated to those of pregnenolone, DHA, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT in seminal plasma were also estimated; they were all significantly correlated to the levels of the corresponding steroid in the systemic blood withdrawn both before and after masturbation. As a practical consequence, the results indicate that whenever both blood and semen are analysed, blood sampling must precede semen collection.", "title": "" }, { "docid": "33cab0ec47af5e40d64e34f8ffc7dd6f", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "6c72b38246e35d1f49d7f55e89b42f21", "text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. 
Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.", "title": "" }, { "docid": "95db5921ba31588e962ffcd8eb6469b0", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. 
The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this", "title": "" } ]
scidocsrr
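Each record in this dump follows the layout visible above: a 32-character query_id, a query string (a paper title), a list of positive passages and a list of negative passages (each passage carrying docid, text and title fields), and a subset tag such as scidocsrr. The sketch below shows one way such records could be iterated for downstream use; it assumes the dump is exported as JSON Lines with exactly these field names and uses a hypothetical file name, so both should be adjusted to match the actual export.

```python
import json

# Minimal sketch: iterate over reranking records shaped like the ones above.
# Assumes a JSON Lines export, one record per line, with the field names used
# below; these names and the file name are assumptions, not guaranteed by the dump.
def load_records(path):
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            yield {
                "query_id": record["query_id"],
                "query": record["query"],
                "positives": record["positive_passages"],  # list of {"docid", "text", "title"}
                "negatives": record["negative_passages"],  # list of {"docid", "text", "title"}
                "subset": record["subset"],                # e.g. "scidocsrr"
            }

if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    for rec in load_records("scidocs_rerank.jsonl"):
        print(rec["query_id"], rec["subset"], len(rec["positives"]), len(rec["negatives"]))
```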
d8651538ed422dc108590164f96bf59f
Towards a Semantic Driven Framework for Smart Grid Applications: Model-Driven Development Using CIM, IEC 61850 and IEC 61499
[ { "docid": "e4bc807f5d5a9f81fdfadd3632ffa5d9", "text": "openview network node manager designing and implementing an enterprise solution PDF high availability in websphere messaging solutions PDF designing web interfaces principles and patterns for rich interactions PDF pivotal certified spring enterprise integration specialist exam a study guide PDF active directory designing deploying and running active directory PDF application architecture for net designing applications and services patterns & practices PDF big data analytics from strategic planning to enterprise integration with tools techniques nosql and graph PDF designing and building security operations center PDF patterns of enterprise application architecture PDF java ee and net interoperability integration strategies patterns and best practices PDF making healthy places designing and building for health well-being and sustainability PDF architectural ceramics for the studio potter designing building installing PDF xml for data architects designing for reuse and integration the morgan kaufmann series in data management systems PDF", "title": "" }, { "docid": "3be99b1ef554fde94742021e4782a2aa", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" }, { "docid": "7e6bbd25c49b91fd5dc4248f3af918a7", "text": "Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.", "title": "" }, { "docid": "152c11ef8449d53072bbdb28432641fa", "text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. 
The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.", "title": "" }, { "docid": "e3b1e52066d20e7c92e936cdb72cc32b", "text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.", "title": "" } ]
[ { "docid": "7e848e98909c69378f624ce7db31dbfa", "text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.", "title": "" }, { "docid": "8e88621c949e0df5a7eda810bfac113d", "text": "About one fourth of patients with bipolar disorders (BD) have depressive episodes with a seasonal pattern (SP) coupled to a more severe disease. However, the underlying genetic influence on a SP in BD remains to be identified. We studied 269 BD Caucasian patients, with and without SP, recruited from university-affiliated psychiatric departments in France and performed a genetic single-marker analysis followed by a gene-based analysis on 349 single nucleotide polymorphisms (SNPs) spanning 21 circadian genes and 3 melatonin pathway genes. A SP in BD was nominally associated with 14 SNPs identified in 6 circadian genes: NPAS2, CRY2, ARNTL, ARNTL2, RORA and RORB. After correcting for multiple testing, using a false discovery rate approach, the associations remained significant for 5 SNPs in NPAS2 (chromosome 2:100793045-100989719): rs6738097 (pc = 0.006), rs12622050 (pc = 0.006), rs2305159 (pc = 0.01), rs1542179 (pc = 0.01), and rs1562313 (pc = 0.02). The gene-based analysis of the 349 SNPs showed that rs6738097 (NPAS2) and rs1554338 (CRY2) were significantly associated with the SP phenotype (respective Empirical p-values of 0.0003 and 0.005). The associations remained significant for rs6738097 (NPAS2) after Bonferroni correction. The epistasis analysis between rs6738097 (NPAS2) and rs1554338 (CRY2) suggested an additive effect. Genetic variations in NPAS2 might be a biomarker for a seasonal pattern in BD.", "title": "" }, { "docid": "51f90bbb8519a82983eec915dd643d34", "text": "The growth of vehicles in Yogyakarta Province, Indonesia is not proportional to the growth of roads. This problem causes severe traffic jam in many main roads. Common traffic anomalies detection using surveillance camera requires manpower and costly, while traffic anomalies detection with crowdsourcing mobile applications are mostly owned by private. This research aims to develop a real-time traffic classification by harnessing the power of social network data, Twitter. In this study, Twitter data are processed to the stages of preprocessing, feature extraction, and tweet classification. This study compares classification performance of three machine learning algorithms, namely Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). Experimental results show that SVM algorithm produced the best performance among the other algorithms with 99.77% and 99.87% of classification accuracy in balanced and imbalanced data, respectively. 
This research implies that social network service may be used as an alternative source for traffic anomalies detection by providing information of traffic flow condition in real-time.", "title": "" }, { "docid": "2d7de6d43997449f4ad922bc71e385ad", "text": "A microwave duplexer with high isolation is presented in this paper. The device is based on triple-mode filters that are built using silver-plated ceramic cuboids. To create a six-pole, six-transmission-zero filter in the DCS-1800 band, which is utilized in mobile communications, two cuboids are cascaded. To shift spurious harmonics, low dielectric caps are placed on the cuboid faces. These caps push the first cuboid spurious up in frequency by around 340 MHz compared to the uncapped cuboid, allowing a 700-MHz spurious free window. To verify the design, a DCS-1800 duplexer with 75-MHz widebands is built. It achieves around 1 dB of insertion loss for both the receive and transmit ports with around 70 dB of mutual isolation within only 20-MHz band separation, using a volume of only 30 cm3 .", "title": "" }, { "docid": "78cae00cd81dc1f519d25ff6cb8f41c8", "text": "We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds---e.g. the characteristic silverlining and the \"whiteness\" of the inner body---challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enhances the network's ability to learn faster and predict with higher accuracy while using fewer coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds to minutes. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, for high-quality production of animated content.", "title": "" }, { "docid": "e89123df2d60f011a3c6057030c42167", "text": "Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. 
The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.", "title": "" }, { "docid": "7b94828573579b393a371d64d5125f64", "text": "This paper presents an artificial neural network(ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24hour ahead forecasts with a currently used forecasting technique applied to the same data.", "title": "" }, { "docid": "62c96348c818cbe9f1aa72df5ca717e6", "text": "BACKGROUND\nChocolate consumption has long been associated with enjoyment and pleasure. Popular claims confer on chocolate the properties of being a stimulant, relaxant, euphoriant, aphrodisiac, tonic and antidepressant. The last claim stimulated this review.\n\n\nMETHOD\nWe review chocolate's properties and the principal hypotheses addressing its claimed mood altering propensities. We distinguish between food craving and emotional eating, consider their psycho-physiological underpinnings, and examine the likely 'positioning' of any effect of chocolate to each concept.\n\n\nRESULTS\nChocolate can provide its own hedonistic reward by satisfying cravings but, when consumed as a comfort eating or emotional eating strategy, is more likely to be associated with prolongation rather than cessation of a dysphoric mood.\n\n\nLIMITATIONS\nThis review focuses primarily on clarifying the possibility that, for some people, chocolate consumption may act as an antidepressant self-medication strategy and the processes by which this may occur.\n\n\nCONCLUSIONS\nAny mood benefits of chocolate consumption are ephemeral.", "title": "" }, { "docid": "899349ba5a7adb31f5c7d24db6850a82", "text": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns.\n We extend blue noise sampling to multiple classes where each individual class as well as their unions exhibit blue noise characteristics. We propose two flavors of algorithms to generate such multi-class blue noise samples, one extended from traditional Poisson hard disk sampling for explicit control of sample spacing, and another based on our soft disk sampling for explicit control of sample count. 
Our algorithms support uniform and adaptive sampling, and are applicable to both discrete and continuous sample space in arbitrary dimensions. We study characteristics of samples generated by our methods, and demonstrate applications in object placement, sensor layout, and color stippling.", "title": "" }, { "docid": "3a651ab1f8c05cfae51da6a14f6afef8", "text": "The taxonomical relationship of Cylindrospermopsis raciborskii and Raphidiopsis mediterranea was studied by morphological and 16S rRNA gene diversity analyses of natural populations from Lake Kastoria, Greece. Samples were obtained during a bloom (23,830 trichomes mL ) in August 2003. A high diversity of apical cell, trichome, heterocyte and akinete morphology, trichome fragmentation and reproduction was observed. Trichomes were grouped into three dominant morphotypes: the typical and the non-heterocytous morphotype of C. raciborskii and the typical morphotype of R. mediterranea. A morphometric comparison of the dominant morphotypes showed significant differences in mean values of cell and trichome sizes despite the high overlap in the range of the respective size values. Additionally, two new morphotypes representing developmental stages of the species are described while a new mode of reproduction involving a structurally distinct reproductive cell is described for the first time in planktic Nostocales. A putative life-cycle, common for C. raciborskii and R. mediterranea is proposed revealing that trichome reproduction of R. mediterranea gives rise both to R. mediterranea and C. raciborskii non-heterocytous morphotypes. The phylogenetic analysis of partial 16S rRNA gene (ca. 920 bp) of the co-existing Cylindrospermopsis and Raphidiopsis morphotypes revealed only one phylotype which showed 99.54% similarity to R. mediterranea HB2 (China) and 99.19% similarity to C. raciborskii form 1 (Australia). We propose that all morphotypes comprised stages of the life cycle of C. raciborkii whereas R. mediterranea from Lake Kastoria (its type locality) represents non-heterocytous stages of Cylindrospermopsis complex life cycle. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d2a94d4dc8d8d5d71fc5f838f692544f", "text": "This introductory chapter reviews the emergence, classification, and contemporary examples of cultural robots: social robots that are shaped by, producers of, or participants in culture. We review the emergence of social robotics as a field, and then track early references to the terminology and key lines of inquiry of Cultural Robotics. Four categories of the integration of culture with robotics are outlined; and the content of the contributing chapters following this introductory chapter are summarised within these categories.", "title": "" }, { "docid": "7c427a383fe1e95f33049335371a84e4", "text": "Gene set analysis is moving towards considering pathway topology as a crucial feature. Pathway elements are complex entities such as protein complexes, gene family members and chemical compounds. The conversion of pathway topology to a gene/protein networks (where nodes are a simple element like a gene/protein) is a critical and challenging task that enables topology-based gene set analyses. Unfortunately, currently available R/Bioconductor packages provide pathway networks only from single databases. They do not propagate signals through chemical compounds and do not differentiate between complexes and gene families. Here we present graphite, a Bioconductor package addressing these issues. 
Pathway information from four different databases is interpreted following specific biologically-driven rules that allow the reconstruction of gene-gene networks taking into account protein complexes, gene families and sensibly removing chemical compounds from the final graphs. The resulting networks represent a uniform resource for pathway analyses. Indeed, graphite provides easy access to three recently proposed topological methods. The graphite package is available as part of the Bioconductor software suite. graphite is an innovative package able to gather and make easily available the contents of the four major pathway databases. In the field of topological analysis graphite acts as a provider of biological information by reducing the pathway complexity considering the biological meaning of the pathway elements.", "title": "" }, { "docid": "c1cdb2ab2a594e7fbb1dfdb261f0910c", "text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.", "title": "" }, { "docid": "ab3fb8980fa8d88e348f431da3d21ed4", "text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. 
In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes are now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.", "title": "" }, { "docid": "433340f3392257a8ac830215bf5e3ef2", "text": "A compact Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna (LWA) is proposed. Internal vias are inserted in the SIW in order to have narrow walls, and so reducing the size of the SIW-LWA, the new structure is called Slow Wave - Substrate Integrated Waveguide - Leaky Wave Antenna (SW-SIW-LWA), since inserting the vias induce the SW effect. After designing the antenna and simulating with HFSS a reduction of 30% of the transverse side of the antenna is attained while maintaining an acceptable gain. Other parameters like the radiation efficiency, Gain, directivity, and radiation pattern are analyzed. Finally a Comparison of our miniaturization technique with Half-Mode Substrate Integrated Waveguide (HMSIW) technique realized in recent articles is done, shows that SW-SIW-LWA technique could be a good candidate for SIW miniaturization.", "title": "" }, { "docid": "1b24b5d1936377c3659273a68aafeb35", "text": "In this paper, hand dorsal images acquired under infrared light are used to design an accurate personal authentication system. Each of the image is segmented into palm dorsal and fingers which are subsequently used to extract palm dorsal veins and infrared hand geometry features respectively. A new quality estimation algorithm is proposed to estimate the quality of palm dorsal which assigns low values to the pixels containing hair or skin texture. Palm dorsal is enhanced using filtering. For vein extraction, information provided by the enhanced image and the vein quality is consolidated using a variational approach. The proposed vein extraction can handle the issues of hair, skin texture and variable width veins so as to extract the genuine veins accurately. Several post processing techniques are introduced in this paper for accurate feature extraction of infrared hand geometry features. Matching scores are obtained by matching palm dorsal veins and infrared hand geometry features. These are eventually fused for authentication. For performance evaluation, a database of 1500 hand images acquired from 300 different hands is created. Experimental results demonstrate the superiority of the proposed system over existing", "title": "" }, { "docid": "49cf26b6c6dde96df9009a68758ee506", "text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. 
To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.", "title": "" }, { "docid": "cf02d97cdcc1a4be51ed0af2af771b7d", "text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.", "title": "" }, { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. 
Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" }, { "docid": "33b129cb569c979c81c0cb1c0a5b9594", "text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.", "title": "" } ]
scidocsrr
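Records of this shape pair a small list of relevant (positive) passages against a larger pool of negatives, which is exactly the form needed to evaluate a reranker. Below is a small, self-contained sketch of that evaluation idea: every passage is scored against the query and the reciprocal rank of the first positive is reported. The token-overlap scorer and the demo inputs are only stand-in assumptions, not part of the dataset; a real setup would substitute BM25 or a trained cross-encoder as the scoring function.

```python
import re

# Tokenize into a set of lowercase alphanumeric tokens.
def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

# Stand-in scorer: fraction of query tokens that appear in the passage.
def overlap_score(query, passage_text):
    q, p = tokenize(query), tokenize(passage_text)
    return len(q & p) / (len(q) or 1)

# Score positives and negatives together, sort by score, and return the
# reciprocal rank of the highest-ranked positive passage.
def reciprocal_rank(query, positives, negatives):
    scored = [(overlap_score(query, p["text"]), True) for p in positives]
    scored += [(overlap_score(query, n["text"]), False) for n in negatives]
    scored.sort(key=lambda item: item[0], reverse=True)
    for rank, (_, is_positive) in enumerate(scored, start=1):
        if is_positive:
            return 1.0 / rank
    return 0.0

if __name__ == "__main__":
    # Toy demo only; the passages here are invented, not taken from the dump.
    demo_query = "LBANN: livermore big artificial neural network HPC toolkit"
    positives = [{"text": "A toolkit for training large artificial neural networks on HPC systems."}]
    negatives = [{"text": "A study of chocolate consumption and mood."}]
    print(reciprocal_rank(demo_query, positives, negatives))
```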
ddbde03fe2445a7daad4ba7f9c09aec8
LBANN: livermore big artificial neural network HPC toolkit
[ { "docid": "091279f6b95594f9418591264d0d7e3c", "text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.", "title": "" } ]
[ { "docid": "1d7035cc5b85e13be6ff932d39740904", "text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor", "title": "" }, { "docid": "1dbaa72cd95c32d1894750357e300529", "text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.", "title": "" }, { "docid": "738555e605ee2b90ff99bef6d434162d", "text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool1 are available to the research community.", "title": "" }, { "docid": "3a6f2d4fa9531d9bc8c2dbf2110990f3", "text": "In a Grid Connected Photo-voltaic System (GCPVS) maximum power is to be drawn from the PV array and has to be injected into the Grid, using suitable maximum power point tracking algorithms, converter topologies and control algorithms. Usually converter topologies such as buck, boost, buck-boost, sepic, flyback, push pull etc. are used. Loss factors such as irradiance, temperature, shading effects etc. 
have zero loss in a two stage system, but additional converter used will lead to an extra loss which makes the single stage system more efficient when compared to a two stage systems, in applications like standalone and grid connected renewable energy systems. In Cuk converter the source and load side are separated via a capacitor thus energy transfer from the source side to load side occurs through this capacitor which leads to less current ripples at the load side. Thus in this paper, a Simulink model of two stage GCPVS using Cuk converter is being designed, simulated and is compared with a GCPVS using Boost Converter. For tracking the maximum power point the most common and accurate method called incremental conductance algorithm is used. And the inverter control is done using the dc bus voltage algorithm.", "title": "" }, { "docid": "1e7f14531caad40797594f9e4c188697", "text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.", "title": "" }, { "docid": "fc172716fe01852d53d0ae5d477f3afc", "text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.", "title": "" }, { "docid": "2da528dcbf7a97875e0a5a1a79cbaa21", "text": "Convolutional neural net-like structures arise from training an unstructured deep belief network (DBN) using structured simulation data of 2-D Ising Models at criticality. 
The convolutional structure arises not just because such a structure is optimal for the task, but also because the belief network automatically engages in block renormalization procedures to “rescale” or “encode” the input, a fundamental approach in statistical mechanics. This work primarily reviews the work of Mehta et al. [1], the group that first made the discovery that such a phenomenon occurs, and replicates their results training a DBN on Ising models, confirming that weights in the DBN become spatially concentrated during training on critical Ising samples.", "title": "" }, { "docid": "6f9bca88fbb59e204dd8d4ae2548bd2d", "text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach including the lower extremity in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and perform their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). While at ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively) suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.", "title": "" }, { "docid": "a6b65ee65eea7708b4d25fb30444c8e6", "text": "The Intelligent vehicle is experiencing revolutionary growth in research and industry, but it still suffers from a lot of security vulnerabilities. Traditional security methods are incapable of providing secure IV, mainly in terms of communication. In IV communication, major issues are trust and data accuracy of received and broadcasted reliable data in the communication channel. Blockchain technology works for the cryptocurrency, Bitcoin which has been recently used to build trust and reliability in peer-to-peer networks with similar topologies to IV Communication world. IV to IV, communicate in a decentralized manner within communication networks. In this paper, we have proposed, Trust Bit (TB) for IV communication among IVs using Blockchain technology. Our proposed trust bit provides surety for each IVs broadcasted data, to be secure and reliable in every particular networks. Our Trust Bit is a symbol of trustworthiness of vehicles behavior, and vehicles legal and illegal action. Our proposal also includes a reward system, which can exchange some TB among IVs, during successful communication. For the data management of this trust bit, we have used blockchain technology in the vehicular cloud, which can store all Trust bit details and can be accessed by IV anywhere and anytime. Our proposal provides secure and reliable information. 
We evaluate our proposal with the help of IV communication on intersection use case which analyzes a variety of trustworthiness between IVs during communication.", "title": "" }, { "docid": "b1b56020802d11d1f5b2badb177b06b9", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.", "title": "" }, { "docid": "cdfec1296a168318f773bb7ef0bfb307", "text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.", "title": "" }, { "docid": "73f8a5e5e162cc9b1ed45e13a06e78a5", "text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. 
In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.", "title": "" }, { "docid": "70593bbda6c88f0ac10e26768d74b3cd", "text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. The proposed method is favorable for healthcare applications because in addition to improved prediction performance, relationships among the different risks and risk factors are also identified. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights.", "title": "" }, { "docid": "c3ef6598f869e40fc399c89baf0dffd8", "text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have effectively been improved according to features of Sudoku puzzles. 
The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.", "title": "" }, { "docid": "8222f8eae81c954e8e923cbd883f8322", "text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.", "title": "" }, { "docid": "26886ff5cb6301dd960e79d8fb3f9362", "text": "We propose a preprocessing method to improve the performance of Principal Component Analysis (PCA) for classification problems composed of two steps; in the first step, the weight of each feature is calculated by using a feature weighting method. Then the features with weights larger than a predefined threshold are selected. The selected relevant features are then subject to the second step. In the second step, variances of features are changed until the variances of the features are corresponded to their importance. By taking the advantage of step 2 to reveal the class structure, we expect that the performance of PCA increases in classification problems. Results confirm the effectiveness of our proposed methods.", "title": "" }, { "docid": "21a45086509bd0edb1b578a8a904bf50", "text": "Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. 
But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, Histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in a more cost efficient modeling without significantly sacrificing on the analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criterion to help users in the task of univariate model selection.", "title": "" }, { "docid": "063389c654f44f34418292818fc781e7", "text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.", "title": "" }, { "docid": "c760e6db820733dc3f57306eef81e5c9", "text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. 
This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.", "title": "" }, { "docid": "2c0770b42050c4d67bfc7e723777baa6", "text": "We describe a framework for understanding how age-related changes in adult development affect work motivation, and, building on recent life-span theories and research on cognitive abilities, personality, affect, vocational interests, values, and self-concept, identify four intraindividual change trajectories (loss, gain, reorganization, and exchange). We discuss implications of the integrative framework for the use and effectiveness of different motivational strategies with midlife and older workers in a variety of jobs, as well as abiding issues and future research directions.", "title": "" } ]
scidocsrr
8c2975ba60444927e58c923e7e5a9a71
Empirical evidence for resource-rational anchoring and adjustment.
[ { "docid": "637a7d7e0c33b6f63f17f9ec77add5a6", "text": "In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.", "title": "" }, { "docid": "68477e8a53020dd0b98014a6eab96255", "text": "This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.", "title": "" } ]
[ { "docid": "6b3cdd024b6232e5226cae2c15463509", "text": "Blended learning involves the combination of two fields of concern: education and educational technology. To gain the scholarly recognition from educationists, it is necessary to revisit its models and educational theory underpinned. This paper respond to this issue by reviewing models related to blended learning based on two prominent educational theorists, Maslow’s and Vygotsky’s view. Four models were chosen due to their holistic ideas or vast citations related to blended learning: (1) E-Moderation Model emerging from Open University of UK; (2) Learning Ecology Model by Sun Microsoft System; (3) Blended Learning Continuum in University of Glamorgan; and (4) Inquirybased Framework by Garrison and Vaughan. The discussion of each model concerning pedagogical impact to learning and teaching are made. Critical review of the models in accordance to Maslow or Vygotsky is argued. Such review is concluded with several key principles for the design and practice in", "title": "" }, { "docid": "65840e476736336c9cb0fa18f8321492", "text": "Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step —adding a constant shift to the input data— to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution.", "title": "" }, { "docid": "f82972fcda26b339eb078bbcaad26cdc", "text": "Colorectal cancer (CRC) shows variable underlying molecular changes with two major mechanisms of genetic instability: chromosomal instability and microsatellite instability. This review aims to delineate the different pathways of colorectal carcinogenesis and provide an overview of the most recent advances in molecular pathological classification systems for colorectal cancer. Two molecular pathological classification systems for CRC have recently been proposed. Integrated molecular analysis by The Cancer Genome Atlas project is based on a wide-ranging genomic and transcriptomic characterisation study of CRC using array-based and sequencing technologies. This approach classified CRC into two major groups consistent with previous classification systems: (1) ∼16 % hypermutated cancers with either microsatellite instability (MSI) due to defective mismatch repair (∼13 %) or ultramutated cancers with DNA polymerase epsilon proofreading mutations (∼3 %); and (2) ∼84 % non-hypermutated, microsatellite stable (MSS) cancers with a high frequency of DNA somatic copy number alterations, which showed common mutations in APC, TP53, KRAS, SMAD4, and PIK3CA. The recent Consensus Molecular Subtypes (CMS) Consortium analysing CRC expression profiling data from multiple studies described four CMS groups: almost all hypermutated MSI cancers fell into the first category CMS1 (MSI-immune, 14 %) with the remaining MSS cancers subcategorised into three groups of CMS2 (canonical, 37 %), CMS3 (metabolic, 13 %) and CMS4 (mesenchymal, 23 %), with a residual unclassified group (mixed features, 13 %). 
Although further research is required to validate these two systems, they may be useful for clinical trial designs and future post-surgical adjuvant treatment decisions, particularly for tumours with aggressive features or predicted responsiveness to immune checkpoint blockade.", "title": "" }, { "docid": "6daa93f2a7cfaaa047ecdc04fb802479", "text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.", "title": "" }, { "docid": "e9046bfaf5488138ca5c2ff0067646a8", "text": "In this paper we consider several new versions of approximate string matching with gaps. The main characteristic of these new versions is the existence of gaps in the matching of a given pattern in a text. Algorithms are devised for each version and their time and space complexities are stated. These specific versions of approximate string matching have various applications in computerized music analysis. CR Classification: F.2.2", "title": "" }, { "docid": "ab07e92f052a03aac253fabadaea4ab3", "text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.", "title": "" }, { "docid": "3a32bb2494edefe8ea28a83dad1dc4c4", "text": "Objective: The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. Methods: The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. 
Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publically available database of 23 PPG recordings. Results: On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. Conclusion: The error rate is significantly reduced when compared with the state-of-the art PPG-based HR estimation methods. Significance: The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.", "title": "" }, { "docid": "59639429e45dc75e0b8db773d112f994", "text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.", "title": "" }, { "docid": "b2444538456800e84df8288f4a482775", "text": "Thermoelectric generators (TEGs) provide a unique way for harvesting thermal energy. These devices are compact, durable, inexpensive, and scalable. Unfortunately, the conversion efficiency of TEGs is low. This requires careful design of energy harvesting systems including the interface circuitry between the TEG module and the load, with the purpose of minimizing power losses. In this paper, it is analytically shown that the traditional approach for estimating the internal resistance of TEGs may result in a significant loss of harvested power. This drawback comes from ignoring the dependence of the electrical behavior of TEGs on their thermal behavior. Accordingly, a systematic method for accurately determining the TEG input resistance is presented. Next, through a case study on automotive TEGs, it is shown that compared to prior art, more than 11% of power losses in the interface circuitry that lies between the TEG and the electrical load can be saved by the proposed modeling technique. In addition, it is demonstrated that the traditional approach would have resulted in a deviation from the target regulated voltage by as much as 59%.", "title": "" }, { "docid": "cb85db604bf21751766daf3751dd73bd", "text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. 
An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.", "title": "" }, { "docid": "c48d0c94d3e97661cc2c944cc4b61813", "text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotyping of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.", "title": "" }, { "docid": "f395e3d72341bd20e1a16b97259bad7d", "text": "Malicious software in form of Internet worms, computer viruses, and Trojan horses poses a major threat to the security of networked systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware families share typical behavioral patterns reflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a method for learning and discrimination of malware behavior. Our method proceeds in three stages: (a) behavior of collected malware is monitored in a sandbox environment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifier is trained using learning techniques and (c) discriminative features of the behavior models are ranked for explanation of classification decisions. 
Experiments with different heterogeneous test data collected over several months using honeypots demonstrate the effectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.", "title": "" }, { "docid": "30d0ff3258decd5766d121bf97ae06d4", "text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.", "title": "" }, { "docid": "5e6f9014a07e7b2bdfd255410a73b25f", "text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors, such as skilled human resource and quality of products and services, in order to compete in the OSDO business.", "title": "" }, { "docid": "3b7ac492add26938636ae694ebb14b65", "text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. 
This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443, UMIACS-TR-95-40.", "title": "" }, { "docid": "f0c4c1a82eee97d19012421614ee5d5f", "text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.", "title": "" }, { "docid": "58d7e76a4b960e33fc7b541d04825dc9", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. 
This work also focuses on IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.", "title": "" }, { "docid": "f53dc3977a9e8c960e0232ef59c0e7fd", "text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.", "title": "" }, { "docid": "69f413d247e88022c3018b2dee1b53e2", "text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. Object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \"backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" } ]
scidocsrr
8820236a0f3281d41e9c0098bfb27062
Taxonomy Construction Using Syntactic Contextual Evidence
[ { "docid": "57457909ea5fbee78eccc36c02464942", "text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.", "title": "" }, { "docid": "074011796235a8ab0470ba0fe967918f", "text": "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions:popularity and productivity. Intuitively, a candidate ispopular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.", "title": "" } ]
[ { "docid": "df22aa6321c86b0aec44778c7293daca", "text": "BACKGROUND\nAtopic dermatitis (AD) is characterized by dry skin and a hyperactive immune response to allergens, 2 cardinal features that are caused in part by epidermal barrier defects. Tight junctions (TJs) reside immediately below the stratum corneum and regulate the selective permeability of the paracellular pathway.\n\n\nOBJECTIVE\nWe evaluated the expression/function of the TJ protein claudin-1 in epithelium from AD and nonatopic subjects and screened 2 American populations for single nucleotide polymorphisms in the claudin-1 gene (CLDN1).\n\n\nMETHODS\nExpression profiles of nonlesional epithelium from patients with extrinsic AD, nonatopic subjects, and patients with psoriasis were generated using Illumina's BeadChips. Dysregulated intercellular proteins were validated by means of tissue staining and quantitative PCR. Bioelectric properties of epithelium were measured in Ussing chambers. Functional relevance of claudin-1 was assessed by using a knockdown approach in primary human keratinocytes. Twenty-seven haplotype-tagging SNPs in CLDN1 were screened in 2 independent populations with AD.\n\n\nRESULTS\nWe observed strikingly reduced expression of the TJ proteins claudin-1 and claudin-23 only in patients with AD, which were validated at the mRNA and protein levels. Claudin-1 expression inversely correlated with T(H)2 biomarkers. We observed a remarkable impairment of the bioelectric barrier function in AD epidermis. In vitro we confirmed that silencing claudin-1 expression in human keratinocytes diminishes TJ function while enhancing keratinocyte proliferation. Finally, CLDN1 haplotype-tagging SNPs revealed associations with AD in 2 North American populations.\n\n\nCONCLUSION\nCollectively, these data suggest that an impairment in tight junctions contributes to the barrier dysfunction and immune dysregulation observed in AD subjects and that this may be mediated in part by reductions in claudin-1.", "title": "" }, { "docid": "617bb88fdb8b76a860c58fc887ab2bc4", "text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.", "title": "" }, { "docid": "0c4ca5a63c7001e6275b05da7771a7a6", "text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R, our algorithm achieves Oc(n + d log n) query time and Oc(n + d log n) space, where ρ ≤ 0.73/c + O(1/c) + oc(1). 
This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "9acb65952ca0ceb489a97794b6f380ce", "text": "Conventional railway track, of the type seen throughout the majority of the UK rail network, is made up of rails that are fixed to sleepers (ties), which, in turn, are supported by ballast. The ballast comprises crushed, hard stone and its main purpose is to distribute loads from the sleepers as rail traffic passes along the track. Over time, the stones in the ballast deteriorate, leading the track to settle and the geometry of the rails to change. Changes in geometry must be addressed in order that the track remains in a safe condition. Track inspections are carried out by measurement trains, which use sensors to precisely measure the track geometry. Network operators aim to carry out maintenance before the track geometry degrades to such an extent that speed restrictions or line closures are required. However, despite the fact that it restores the track geometry, the maintenance also worsens the general condition of the ballast, meaning that the rate of track geometry deterioration tends to increase as the amount of maintenance performed to the ballast increases. This paper considers the degradation, inspection and maintenance of a single one eighth of a mile section of railway track. A Markov model of such a section is produced. Track degradation data from the UK rail network has been analysed to produce degradation distributions which are used to define transition rates within the Markov model. The model considers the changing deterioration rate of the track section following maintenance and is used to analyse the effects of changing the level of track geometry degradation at which maintenance is requested for the section. The results are also used to show the effects of unrevealed levels of degradation. A model such as the one presented can be used to form an integral part of an asset management strategy and maintenance decision making process for railway track.", "title": "" }, { "docid": "c6daad10814bafb3453b12cfac30b788", "text": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MSCOCO and Flickr30K datasets. 
On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.", "title": "" }, { "docid": "8a8edb63c041a01cbb887cd526b97eb0", "text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.", "title": "" }, { "docid": "912c4601f8c6e31107b21233ee871a6b", "text": "The physiological mechanisms that control energy balance are reciprocally linked to those that control reproduction, and together, these mechanisms optimize reproductive success under fluctuating metabolic conditions. Thus, it is difficult to understand the physiology of energy balance without understanding its link to reproductive success. The metabolic sensory stimuli, hormonal mediators and modulators, and central neuropeptides that control reproduction also influence energy balance. In general, those that increase ingestive behavior inhibit reproductive processes, with a few exceptions. Reproductive processes, including the hypothalamic-pituitary-gonadal (HPG) system and the mechanisms that control sex behavior are most proximally sensitive to the availability of oxidizable metabolic fuels. The role of hormones, such as insulin and leptin, are not understood, but there are two possible ways they might control food intake and reproduction. They either mediate the effects of energy metabolism on reproduction or they modulate the availability of metabolic fuels in the brain or periphery. 
This review examines the neural pathways from fuel detectors to the central effector system emphasizing the following points: first, metabolic stimuli can directly influence the effector systems independently from the hormones that bind to these central effector systems. For example, in some cases, excess energy storage in adipose tissue causes deficits in the pool of oxidizable fuels available for the reproductive system. Thus, in such cases, reproduction is inhibited despite a high body fat content and high plasma concentrations of hormones that are thought to stimulate reproductive processes. The deficit in fuels creates a primary sensory stimulus that is inhibitory to the reproductive system, despite high concentrations of hormones, such as insulin and leptin. Second, hormones might influence the central effector systems [including gonadotropin-releasing hormone (GnRH) secretion and sex behavior] indirectly by modulating the metabolic stimulus. Third, the critical neural circuitry involves extrahypothalamic sites, such as the caudal brain stem, and projections from the brain stem to the forebrain. Catecholamines, neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH) are probably involved. Fourth, the metabolic stimuli and chemical messengers affect the motivation to engage in ingestive and sex behaviors instead of, or in addition to, affecting the ability to perform these behaviors. Finally, it is important to study these metabolic events and chemical messengers in a wider variety of species under natural or seminatural circumstances.", "title": "" }, { "docid": "017055a324d781774f05e35d07eff8f6", "text": "We propose a lattice Boltzmann method to treat moving boundary problems for solid objects moving in a fluid. The method is based on the simple bounce-back boundary scheme and interpolations. The proposed method is tested in two flows past an impulsively started cylinder moving in a channel in two dimensions: (a) the flow past an impulsively started cylinder moving in a transient Couette flow; and (b) the flow past an impulsively started cylinder moving in a channel flow at rest. We obtain satisfactory results and also verify the Galilean invariance of the lattice Boltzmann method. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "115ebc84b27fbf2195dbf6a5b0eebac5", "text": "This paper presents an automatic system for fire detection in video sequences. There are several previous methods to detect fire, however, all except two use spectroscopy or particle sensors. The two that use visual information suffer from the inability to cope with a moving camera or a moving scene. One of these is not able to work on general data, such as movie sequences. The other is too simplistic and unrestrictive in determining what is considered fire; so that it can be used reliably only in aircraft dry bays. We propose a system that uses color and motion information computed from video sequences to locate fire. This is done by first using an approach that is based upon creating a Gaussian-smoothed color histogram to detect the fire-colored pixels, and then using a temporal variation of pixels to determine which of these pixels are actually fire pixels. Next, some spurious fire pixels are automatically removed using an erode operation, and some missing fire pixels are found using region growing method. Unlike the two previous vision-based methods for fire detection, our method is applicable to more areas because of its insensitivity to camera motion. 
Two specific applications not possible with previous algorithms are the recognition of fire in the presence of global camera motion or scene motion and the recognition of fire in movies for possible use in an automatic rating system. We show that our method works in a variety of conditions, and that it can automatically determine when it has insufficient information.", "title": "" }, { "docid": "30bc7923529eec5ac7d62f91de804f8e", "text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPF-RNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.", "title": "" }, { "docid": "52050687ccc2844863197f9cba11a3d2", "text": "Classical mechanics was first envisaged by Newton, formed into a powerful tool by Euler, and brought to perfection by Lagrange and Laplace. It has served as the paradigm of science ever since. Even the great revolutions of 19th century physics (namely, the Faraday-Maxwell electro-magnetic theory and the kinetic theory) were viewed as further support for the complete adequacy of the mechanistic world view. The physicist at the end of the 19th century had a coherent conceptual scheme which, in principle at least, answered all his questions about the world. The only work left to be done was the computing of the next decimal. This consensus began to unravel at the beginning of the 20th century. The work of Planck, Einstein, and Bohr simply could not be made to fit. The series of ad hoc moves by Bohr, Ehrenfest, et al., now called the old quantum theory, was viewed by all as, at best, a stopgap. In the period 1925-27 a new synthesis was formed by Heisenberg, Schrödinger, Dirac and others. This new synthesis was so successful that even today, fifty years later, physicists still teach quantum mechanics as it was formulated by these men. Nevertheless, two foundational tasks remained: that of providing a rigorous mathematical formulation of the theory, and that of providing a systematic comparison with classical mechanics so that the full ramifications of the quantum revolution could be clearly revealed. These tasks are, of course, related, and a possible fringe benefit of the second task might be the pointing of the way 'beyond quantum theory'. These tasks were taken up by von Neumann as a consequence of a seminar on the foundations of quantum mechanics conducted by Hilbert in the fall of 1926. 
In papers published in 1927 and in his book, The Mathematical Foundations of Quantum Mechanics, von Neumann provided the first completely rigorous", "title": "" }, { "docid": "707947e404b363963d08a9b7d93c87fb", "text": "The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions.", "title": "" }, { "docid": "85bdac91c8c7d456a7e76ce5927cc994", "text": "Current CNN-based solutions to salient object detection (SOD) mainly rely on the optimization of cross-entropy loss (CELoss). Then the quality of detected saliency maps is often evaluated in terms of F-measure. In this paper, we investigate an interesting issue: can we consistently use the F-measure formulation in both training and evaluation for SOD? By reformulating the standard F-measure we propose the relaxed F-measure which is differentiable w.r.t. the posterior and can be easily appended to the back of CNNs as the loss function. Compared to the conventional cross-entropy loss of which the gradients decrease dramatically in the saturated area, our loss function, named FLoss, holds considerable gradients even when the activation approaches the target. Consequently, the FLoss can continuously force the network to produce polarized activations. Comprehensive benchmarks on several popular datasets show that FLoss outperforms the state-of-the-arts with a considerable margin. More specifically, due to the polarized predictions, our method is able to obtain high quality saliency maps without carefully tuning the optimal threshold, showing significant advantages in real world applications.", "title": "" }, { "docid": "26d347d66524f1d57262e35041d3ca67", "text": "Many Network Representation Learning (NRL) methods have been proposed to learn vector representations for vertices in a network recently. In this paper, we summarize most existing NRL methods into a unified two-step framework, including proximity matrix construction and dimension reduction. We focus on the analysis of proximity matrix construction step and conclude that an NRL method can be improved by exploring higher order proximities when building the proximity matrix. We propose Network Embedding Update (NEU) algorithm which implicitly approximates higher order proximities with theoretical approximation bound and can be applied on any NRL methods to enhance their performances. We conduct experiments on multi-label classification and link prediction tasks. Experimental results show that NEU can make a consistent and significant improvement over a number of NRL methods with almost negligible running time on all three publicly available datasets. The source code of this paper can be obtained from https://github.com/thunlp/NEU.", "title": "" }, { "docid": "2de213f62e6b5fbf89d9b43a3ad78a34", "text": "To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. 
We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.", "title": "" }, { "docid": "f36ef9dd6b78605683f67b382b9639ac", "text": "Stable clones of neural stem cells (NSCs) have been isolated from the human fetal telencephalon. These self-renewing clones give rise to all fundamental neural lineages in vitro. Following transplantation into germinal zones of the newborn mouse brain they participate in aspects of normal development, including migration along established migratory pathways to disseminated central nervous system regions, differentiation into multiple developmentally and regionally appropriate cell types, and nondisruptive interspersion with host progenitors and their progeny. These human NSCs can be genetically engineered and are capable of expressing foreign transgenes in vivo. Supporting their gene therapy potential, secretory products from NSCs can correct a prototypical genetic metabolic defect in neurons and glia in vitro. The human NSCs can also replace specific deficient neuronal populations. Cryopreservable human NSCs may be propagated by both epigenetic and genetic means that are comparably safe and effective. By analogy to rodent NSCs, these observations may allow the development of NSC transplantation for a range of disorders.", "title": "" }, { "docid": "63429f5eebc2434660b0073b802127c2", "text": "Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example, by merging or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier to interference ratio time series for selected node to node links are presented. 
The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility for automatic identification of network merging and splitting opportunities.", "title": "" }, { "docid": "7b54a56e4ad51210bc56bd768a6f4c22", "text": "Research on the predictive bias of cognitive tests has generally shown (a) no slope effects and (b) small intercept effects, typically favoring the minority group. Aguinis, Culpepper, and Pierce (2010) simulated data and demonstrated that statistical artifacts may have led to a lack of power to detect slope differences and an overestimate of the size of the intercept effect. In response to Aguinis et al.'s (2010) call for a revival of predictive bias research, we used data on over 475,000 students entering college between 2006 and 2008 to estimate slope and intercept differences in the college admissions context. Corrections for statistical artifacts were applied. Furthermore, plotting of regression lines supplemented traditional analyses of predictive bias to offer additional evidence of the form and extent to which predictive bias exists. Congruent with previous research on bias of cognitive tests, using SAT scores in conjunction with high school grade-point average to predict first-year grade-point average revealed minimal differential prediction (ΔR²intercept ranged from .004 to .032 and ΔR²slope ranged from .001 to .013 depending on the corrections applied and comparison groups examined). We found, on the basis of regression plots, that college grades were consistently overpredicted for Black and Hispanic students and underpredicted for female students.", "title": "" }, { "docid": "1edb5f3179ebfc33922e12a0c2eea294", "text": "PURPOSE OF REVIEW\nThis review discusses the rational development of guidelines for the management of neonatal sepsis in developing countries.\n\n\nRECENT FINDINGS\nDiagnosis of neonatal sepsis with high specificity remains challenging in developing countries. Aetiology data, particularly from rural, community-based studies, are very limited, but molecular tests to improve diagnostics are being tested in a community-based study in South Asia. Antibiotic susceptibility data are limited, but suggest reducing susceptibility to first-and second-line antibiotics in both hospital and community-acquired neonatal sepsis. Results of clinical trials in South Asia and sub-Saharan Africa assessing feasibility of simplified antibiotic regimens are awaited.\n\n\nSUMMARY\nEffective management of neonatal sepsis in developing countries is essential to reduce neonatal mortality and morbidity. Simplified antibiotic regimens are currently being examined in clinical trials, but reduced antimicrobial susceptibility threatens current empiric treatment strategies. Improved clinical and microbiological surveillance is essential, to inform current practice, treatment guidelines, and monitor implementation of policy changes.", "title": "" }, { "docid": "c235af1fbd499c1c3c10ea850d01bffd", "text": "Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economic pay-as-you-go services, we need Cloud middleware that maximizes sharing and support near zero costs for unused applications. Multi-tenancy, which let multiple tenants (user) to share a single application instance securely, is a key enabler for building such a middleware. 
On the other hand, business processes capture the business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a multi-tenant workflow engine while discussing in detail potential use cases of such an architecture. The primary contributions of this paper are motivating workflow multi-tenancy, and the design and implementation of a multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows.", "title": "" } ]
scidocsrr
cd400e4383dff77cd6958cea9159cf57
How to Build a CC System
[ { "docid": "178cf363aaef9b888e881bf67955d1aa", "text": "The computational creativity community (rightfully) takes a dim view of supposedly creative systems that operate by mere generation. However, what exactly this means has never been adequately defined, and therefore the idea of requiring systems to exceed this standard is problematic. Here, we revisit the question of mere generation and attempt to qualitatively identify what constitutes exceeding this threshold. This exercise leads to the conclusion that the question is likely no longer relevant for the field and that a failure to recognize this is likely detrimental to its future health.", "title": "" } ]
[ { "docid": "f169f42bcdbaf79e7efa9b1066b86523", "text": "Logic and Philosophy of Science Research Group, Hokkaido University, Japan Jan 7, 2015 Abstract In this paper we provide an analysis and overview of some notable definitions, works and thoughts concerning discrete physics (digital philosophy) that mainly suggest a finite and discrete characteristic for the physical world, as well as, of the cellular automaton, which could serve as the basis of a (or the only) perfect mathematical deterministic model for the physical reality.", "title": "" }, { "docid": "a9de29e1d8062b4950e5ab3af6bea8df", "text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.", "title": "" }, { "docid": "9077dede1c2c4bc4b696a93e01c84f52", "text": "Reliable continuous core temperature measurement is of major importance for monitoring patients. The zero heat flux method (ZHF) can potentially fulfil the requirements of non-invasiveness, reliability and short delay time that current measurement methods lack. The purpose of this study was to determine the performance of a new ZHF device on the forehead regarding these issues. Seven healthy subjects performed a protocol of 10 min rest, 30 min submaximal exercise (average temperature increase about 1.5 °C) and 10 min passive recovery in ambient conditions of 35 °C and 50% relative humidity. ZHF temperature (T(zhf)) was compared to oesophageal (T(es)) and rectal (T(re)) temperature. ΔT(zhf)-T(es) had an average bias ± standard deviation of 0.17 ± 0.19 °C in rest, -0.05 ± 0.18 °C during exercise and -0.01 ± 0.20 °C during recovery, the latter two being not significant. The 95% limits of agreement ranged from -0.40 to 0.40 °C and T(zhf) had hardly any delay compared to T(es). T(re) showed a substantial delay and deviation from T(es) when core temperature changed rapidly. 
Results indicate that the studied ZHF sensor tracks T(es) very well in hot and stable ambient conditions and may be a promising alternative for reliable non-invasive continuous core temperature measurement in hospital.", "title": "" }, { "docid": "d292d1334594bec8531e6011fabaafd2", "text": "Insight into the growth (or shrinkage) of “knowledge communities” of authors that build on each other's work can be gained by studying the evolution over time of clusters of documents. We cluster documents based on the documents they cite in common using the Streemer clustering method, which finds cohesive foreground clusters (the knowledge communities) embedded in a diffuse background. We build predictive models with features based on the citation structure, the vocabulary of the papers, and the affiliations and prestige of the authors and use these models to study the drivers of community growth and the predictors of how widely a paper will be cited. We find that scientific knowledge communities tend to grow more rapidly if their publications build on diverse information and use narrow vocabulary and that papers that lie on the periphery of a community have the highest impact, while those not in any community have the lowest impact.", "title": "" }, { "docid": "58fc801888515e773a174e50e05f69fa", "text": "Anopheles mosquitoes, sp is the main vector of malaria disease that is widespread in many parts of the world including in Papua Province. There are four speciesof Anopheles mosquitoes, sp, in Papua namely: An.farauti, An.koliensis, An. subpictus, and An.punctulatus. Larviciding synthetic cause resistance. This study aims to analyze the potential of papaya leaf and seeds extracts (Carica papaya) as larvicides against the mosquitoes Anopheles sp. The experiment was conducted at the Laboratory of Health Research and Development in Jayapura Papua province. The method used is an experimental post only control group design. Sampling was done randomly on the larvae of Anopheles sp of breeding places in Kampung Kehiran Jayapura Sentani District, 1,500 larvae. Analysis of data using statistical analysis to test the log probit mortality regression dosage, Kruskall Wallis and Mann Whitney. The results showed that papaya leaf extract effective in killing larvae of Anopheles sp, value Lethal Concentration (LC50) were 422.311 ppm, 1399.577 ppm LC90, Lethal Time (LT50) 13.579 hours, LT90 23.478 hours. Papaya seed extract is effective in killing mosquito larvae Anopheles sp, with 21.983 ppm LC50, LC90 ppm 137.862, 13.269 hours LT50, LT90 26.885 hours. Papaya seed extract is more effective in killing larvae of Anopheles sp. The mixture of papaya leaf extract and seeds are effective in killing mosquito larvae Anopheles sp, indicated by the percentage of larval mortality, the observation hours to 12, the highest larval mortality in comparison 0,05:0,1 extract, 52%, ratio 0.1 : 0.1 by 48 %, on a 24 hour observation, larval mortality in both groups reached 100 %.", "title": "" }, { "docid": "4ef797ee3961528ec3bed66b2ddac452", "text": "WiFi offloading is envisioned as a promising solution to the mobile data explosion problem in cellular networks. WiFi offloading for moving vehicles, however, poses unique characteristics and challenges, due to high mobility, fluctuating mobile channels, etc. In this paper, we focus on the problem of WiFi offloading in vehicular communication environments. 
Specifically, we discuss the challenges and identify the research issues related to drive-thru Internet access and effectiveness of vehicular WiFi offloading. Moreover, we review the state-of-the-art offloading solutions, in which advanced vehicular communications can be employed. We also shed some lights on the path for future research on this topic.", "title": "" }, { "docid": "100c8fbe79112e2f7e12e85d7a1335f8", "text": "Staging and response criteria were initially developed for Hodgkin lymphoma (HL) over 60 years ago, but not until 1999 were response criteria published for non-HL (NHL). Revisions to these criteria for both NHL and HL were published in 2007 by an international working group, incorporating PET for response assessment, and were widely adopted. After years of experience with these criteria, a workshop including representatives of most major international lymphoma cooperative groups and cancer centers was held at the 11(th) International Conference on Malignant Lymphoma (ICML) in June, 2011 to determine what changes were needed. An Imaging Task Force was created to update the relevance of existing imaging for staging, reassess the role of interim PET-CT, standardize PET-CT reporting, and to evaluate the potential prognostic value of quantitative analyses using PET and CT. A clinical task force was charged with assessing the potential of PET-CT to modify initial staging. A subsequent workshop was help at ICML-12, June 2013. Conclusions included: PET-CT should now be used to stage FDG-avid lymphomas; for others, CT will define stage. Whereas Ann Arbor classification will still be used for disease localization, patients should be treated as limited disease [I (E), II (E)], or extensive disease [III-IV (E)], directed by prognostic and risk factors. Since symptom designation A and B are frequently neither recorded nor accurate, and are not prognostic in most widely used prognostic indices for HL or the various types of NHL, these designations need only be applied to the limited clinical situations where they impact treatment decisions (e.g., stage II HL). PET-CT can replace the bone marrow biopsy (BMBx) for HL. A positive PET of bone or bone marrow is adequate to designate advanced stage in DLBCL. However, BMBx can be considered in DLBCL with no PET evidence of BM involvement, if identification of discordant histology is relevant for patient management, or if the results would alter treatment. BMBx remains recommended for staging of other histologies, primarily if it will impact therapy. PET-CT will be used to assess response in FDG-avid histologies using the 5-point scale, and included in new PET-based response criteria, but CT should be used in non-avid histologies. The definition of PD can be based on a single node, but must consider the potential for flare reactions seen early in treatment with newer targeted agents which can mimic disease progression. Routine surveillance scans are strongly discouraged, and the number of scans should be minimized in practice and in clinical trials, when not a direct study question. Hopefully, these recommendations will improve the conduct of clinical trials and patient management.", "title": "" }, { "docid": "4bcc299aaaea50bfbf11960b66d6d5d3", "text": "The multigram model assumes that language can be described as the output of a memoryless source that emits variable-length sequences of words. The estimation of the model parameters can be formulated as a Maximum Likelihood estimation problem from incomplete data. 
We show that estimates of the model parameters can be computed through an iterative Expectation-Maximization algorithm and we describe a forward-backward procedure for its implementation. We report the results of a systematical evaluation of multi-grams for language modeling on the ATIS database. The objective performance measure is the test set perplexity. Our results show that multigrams outperform conventional n-grams for this task.", "title": "" }, { "docid": "3cfdf87f53d4340287fa92194afe355e", "text": "With the rise of e-commerce, people are accustomed to writing their reviews after receiving the goods. These comments are so important that a bad review can have a direct impact on others buying. Besides, the abundant information within user reviews is very useful for extracting user preferences and item properties. In this paper, we investigate the approach to effectively utilize review information for recommender systems. The proposed model is named LSTM-Topic matrix factorization (LTMF) which integrates both LSTM and Topic Modeling for review understanding. In the experiments on popular review dataset Amazon , our LTMF model outperforms previous proposed HFT model and ConvMF model in rating prediction. Furthermore, LTMF shows the better ability on making topic clustering than traditional topic model based method, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.", "title": "" }, { "docid": "a583b48a8eb40a9e88a5137211f15bce", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rod composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "32a97a3d9f010c7cdd542c34f02afb46", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. 
Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.", "title": "" }, { "docid": "bacd81a1074a877e0c943a6755290d34", "text": "This thesis addresses the problem of scheduling multiple, concurrent, adaptively parallel jobs on a multiprogrammed shared-memory multiprocessor. Adaptively parallel jobs are jobs for which the number of processors that can be used without waste varies during execution. We focus on the specific case of parallel jobs that are scheduled using a randomized work-stealing algorithm, as is used in the Cilk multithreaded language. We begin by developing a theoretical model for two-level scheduling systems, or those in which the operating system allocates processors to jobs, and the jobs schedule their threads on the processors. To analyze the performance of a job scheduling algorithm, we model the operating system as an adversary. We show that a greedy scheduler achieves an execution time that is within a factor of 2 of optimal under these conditions. Guided by our model, we present a randomized work-stealing algorithm for adaptively parallel jobs, algorithm WSAP, which takes a unique approach to estimating the processor desire of a job. We show that attempts to directly measure a job’s instantaneous parallelism are inherently misleading. We also describe a dynamic processor-allocation algorithm, algorithm DP, that allocates processors to jobs in a fair and efficient way. Using these two algorithms, we present the design and implementation of Cilk-AP, a two-level scheduling system for adaptively parallel workstealing jobs. Cilk-AP is implemented by extending the runtime system of Cilk. We tested the Cilk-AP system on a shared-memory symmetric multiprocessor (SMP) with 16 processors. Our experiments show that, relative to the original Cilk system, Cilk-AP incurs negligible overhead and provides up to 37% improvement in throughput and 30% improvement in response time in typical multiprogramming scenarios. This thesis represents joint work with Charles Leiserson and Kunal Agrawal of the Supercomputing Technologies Group at MIT’s Computer Science and Artificial Intelligence Laboratory. Thesis Supervisor: Charles E. Leiserson Title: Professor", "title": "" }, { "docid": "c4b4c647e13d0300845bed2b85c13a3c", "text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). 
To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.", "title": "" }, { "docid": "5769af5ff99595032653dbda724f5a9d", "text": "JULY 2005, GSA TODAY ABSTRACT The subduction factory processes raw materials such as oceanic sediments and oceanic crust and manufactures magmas and continental crust as products. Aqueous fluids, which are extracted from oceanic raw materials via dehydration reactions during subduction, dissolve particular elements and overprint such elements onto the mantle wedge to generate chemically distinct arc basalt magmas. The production of calc-alkalic andesites typifies magmatism in subduction zones. One of the principal mechanisms of modern-day, calc-alkalic andesite production is thought to be mixing of two endmember magmas, a mantle-derived basaltic magma and an arc crust-derived felsic magma. This process may also have contributed greatly to continental crust formation, as the bulk continental crust possesses compositions similar to calc-alkalic andesites. If so, then the mafic melting residue after extraction of felsic melts should be removed and delaminated from the initial basaltic arc crust in order to form “andesitic” crust compositions. The waste materials from the factory, such as chemically modified oceanic materials and delaminated mafic lower crust materials, are transported down to the deep mantle and recycled as mantle plumes. The subduction factory has played a central role in the evolution of the solid Earth through creating continental crust and deep mantle geochemical reservoirs.", "title": "" }, { "docid": "79d22f397503ea852549b9b55dbb6ac6", "text": "This study examines the effects of body shape (women’s waist-to-hip ratio and men’s waist-to-shoulder ratio) on desirability of a potential romantic partner. In judging desirability, we expected male participants to place more emphasis on female body shape, whereas females would focus more on personality characteristics. Further, we expected that relationship type would moderate the extent to which physical characteristics were valued over personality. Specifically, physical characteristics were expected to be most valued in short-term sexual encounters when compared with long-term relationships. Two hundred and thirty-nine participants (134 females, 105 males; 86% Caucasian) rated the desirability of an opposite-sex target for a date, a one-time sexual encounter, and a serious relationship. All key hypotheses were supported by the data.", "title": "" }, { "docid": "f1f08c43fdf29222a61f343390291000", "text": "This paper describes the way of Market Basket Analysis implementation to Six Sigma methodology. Data Mining methods provide a lot of opportunities in the market sector. Basket Market Analysis is one of them. 
Six Sigma methodology uses several statistical methods. With implementation of Market Basket Analysis (as a part of Data Mining) to Six Sigma (to one of its phase), we can improve the results and change the Sigma performance level of the process. In our research we used GRI (General Rule Induction) algorithm to produce association rules between products in the market basket. These associations show a variety between the products. To show the dependence between the products we used a Web plot. The last algorithm in analysis was C5.0. This algorithm was used to build rule-based profiles.", "title": "" }, { "docid": "804b320c6f5b07f7f4d7c5be29c572e9", "text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.", "title": "" }, { "docid": "5a3ffb6a6c15420569ea3c2b064b1c33", "text": "In this paper, we propose a novel tensor graph convolutional neural network (TGCNN) to conduct convolution on factorizable graphs, for which here two types of problems are focused, one is sequential dynamic graphs and the other is cross-attribute graphs. Especially, we propose a graph preserving layer to memorize salient nodes of those factorized subgraphs, i.e. cross graph convolution and graph pooling. For cross graph convolution, a parameterized Kronecker sum operation is proposed to generate a conjunctive adjacency matrix characterizing the relationship between every pair of nodes across two subgraphs. Taking this operation, then general graph convolution may be efficiently performed followed by the composition of small matrices, which thus reduces high memory and computational burden. Encapsuling sequence graphs into a recursive learning, the dynamics of graphs can be efficiently encoded as well as the spatial layout of graphs. To validate the proposed TGCNN, experiments are conducted on skeleton action datasets as well as matrix completion dataset. The experiment results demonstrate that our method can achieve more competitive performance with the state-of-the-art methods.", "title": "" }, { "docid": "e1b72aba65e515e7d85cd1703bded445", "text": "BACKGROUND AND OBJECTIVES\nTo assess the influence of risk factors on the rates and kinetics of peripheral vein phlebitis (PVP) development and its theoretical influence in absolute PVP reduction after catheter replacement.\n\n\nMETHODS\nAll peripheral short intravenous catheters inserted during one month were included (1201 catheters and 967 patients). PVP risk factors were assessed by a Cox proportional hazard model. 
Cumulative probability, conditional failure of PVP and theoretical estimation of the benefit from replacement at different intervals were performed.\n\n\nRESULTS\nFemale gender, catheter insertion at the emergency or medical-surgical wards, forearm site, amoxicillin-clavulamate or aminoglycosides were independent predictors of PVP with hazard ratios (95 confidence interval) of 1.46 (1.09-2.15), 1.94 (1.01-3.73), 2.51 (1.29-4.88), 1.93 (1.20-3.01), 2.15 (1.45-3.20) and 2.10 (1.01-4.63), respectively. Maximum phlebitis incidence was reached sooner in patients with ≥2 risk factors (days 3-4) than in those with <2 (days 4-5). Conditional failure increased from 0.08 phlebitis/one catheter-day for devices with ≤1 risk factors to 0.26 for those with ≥3. The greatest benefit of routine catheter exchange was obtained by replacement every 60h. However, this benefit differed according to the number of risk factors: 24.8% reduction with ≥3, 13.1% with 2, and 9.2% with ≤1.\n\n\nCONCLUSIONS\nPVP dynamics is highly influenced by identifiable risk factors which may be used to refine the strategy of catheter management. Routine replacement every 72h seems to be strictly necessary only in high-risk catheters.", "title": "" }, { "docid": "b475a47a9c8e8aca82c236250bbbfc33", "text": "OBJECTIVE\nTo issue a recommendation on the types and amounts of physical activity needed to improve and maintain health in older adults.\n\n\nPARTICIPANTS\nA panel of scientists with expertise in public health, behavioral science, epidemiology, exercise science, medicine, and gerontology.\n\n\nEVIDENCE\nThe expert panel reviewed existing consensus statements and relevant evidence from primary research articles and reviews of the literature.\n\n\nPROCESS\nAfter drafting a recommendation for the older adult population and reviewing drafts of the Updated Recommendation from the American College of Sports Medicine (ACSM) and the American Heart Association (AHA) for Adults, the panel issued a final recommendation on physical activity for older adults.\n\n\nSUMMARY\nThe recommendation for older adults is similar to the updated ACSM/AHA recommendation for adults, but has several important differences including: the recommended intensity of aerobic activity takes into account the older adult's aerobic fitness; activities that maintain or increase flexibility are recommended; and balance exercises are recommended for older adults at risk of falls. In addition, older adults should have an activity plan for achieving recommended physical activity that integrates preventive and therapeutic recommendations. The promotion of physical activity in older adults should emphasize moderate-intensity aerobic activity, muscle-strengthening activity, reducing sedentary behavior, and risk management.", "title": "" } ]
scidocsrr
785b2bddce513baa7977fa400de3e3e9
Hedging Deep Features for Visual Tracking.
[ { "docid": "e14d1f7f7e4f7eaf0795711fb6260264", "text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.", "title": "" }, { "docid": "001104ca832b10553b28bbd713e6cbd5", "text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "title": "" }, { "docid": "d349cf385434027b4532080819d5745f", "text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "title": "" } ]
[ { "docid": "3e93f1c35e42fa7abc245677f5be16ba", "text": "In this paper, an unequal 1:N Wilkinson power divider with variable power dividing ratio is proposed. The proposed unequal power divider is composed of the conventional Wilkinson divider structure, rectangular-shaped defected ground structure (DGS), island in DGS, and varactor diodes of which capacitance is adjustable according to bias voltage. The high impedance value of microstrip line having DGS is going up and down by adjusting the bias voltage for varactor diodes. Output power dividing ratio (N) is adjusted from 2.59 to 10.4 for the unequal power divider with 2 diodes.", "title": "" }, { "docid": "be398b849ba0caf2e714ea9cc8468d78", "text": "Gadolinium based contrast agents (GBCAs) play an important role in the diagnostic evaluation of many patients. The safety of these agents has been once again questioned after gadolinium deposits were observed and measured in brain and bone of patients with normal renal function. This retention of gadolinium in the human body has been termed \"gadolinium storage condition\". The long-term and cumulative effects of retained gadolinium in the brain and elsewhere are not as yet understood. Recently, patients who report that they suffer from chronic symptoms secondary to gadolinium exposure and retention created gadolinium-toxicity on-line support groups. Their self-reported symptoms have recently been published. Bone and joint complaints, and skin changes were two of the most common complaints. This condition has been termed \"gadolinium deposition disease\". In this review we will address gadolinium toxicity disorders, from acute adverse reactions to GBCAs to gadolinium deposition disease, with special emphasis on the latter, as it is the most recently described and least known.", "title": "" }, { "docid": "426a25d6536a3a388313aadbdb66bbe7", "text": "In this review, we present the recent developments and future prospects of improving nitrogen use efficiency (NUE) in crops using various complementary approaches. These include conventional breeding and molecular genetics, in addition to alternative farming techniques based on no-till continuous cover cropping cultures and/or organic nitrogen (N) nutrition. Whatever the mode of N fertilization, an increased knowledge of the mechanisms controlling plant N economy is essential for improving NUE and for reducing excessive input of fertilizers, while maintaining an acceptable yield and sufficient profit margin for the farmers. Using plants grown under agronomic conditions, with different tillage conditions, in pure or associated cultures, at low and high N mineral fertilizer input, or using organic fertilization, it is now possible to develop further whole plant agronomic and physiological studies. These can be combined with gene, protein and metabolite profiling to build up a comprehensive picture depicting the different steps of N uptake, assimilation and recycling to produce either biomass in vegetative organs or proteins in storage organs. We provide a critical overview as to how our understanding of the agro-ecophysiological, physiological and molecular controls of N assimilation in crops, under varying environmental conditions, has been improved. We OPEN ACCESS Sustainability 2011, 3 1453 have used combined approaches, based on agronomic studies, whole plant physiology, quantitative genetics, forward and reverse genetics and the emerging systems biology. 
Long-term sustainability may require a gradual transition from synthetic N inputs to legume-based crop rotation, including continuous cover cropping systems, where these may be possible in certain areas of the world, depending on climatic conditions. Current knowledge and prospects for future agronomic development and application for breeding crops adapted to lower mineral fertilizer input and to alternative farming techniques are explored, whilst taking into account the constraints of both the current world economic situation and the environment.", "title": "" }, { "docid": "bba21c774160b38eb64bf06b2e8b9ab7", "text": "Open data marketplaces have emerged as a mode of addressing open data adoption barriers. However, knowledge of how such marketplaces affect digital service innovation in open data ecosystems is limited. This paper explores their value proposition for open data users based on an exploratory case study. Five prominent perceived values are identified: lower task complexity, higher access to knowledge, increased possibilities to influence, lower risk and higher visibility. The impact on open data adoption barriers is analyzed and the consequences for ecosystem sustainability is discussed. The paper concludes that open data marketplaces can lower the threshold of using open data by providing better access to open data and associated support services, and by increasing knowledge transfer within the ecosystem.", "title": "" }, { "docid": "69548f662a286c0b3aca5374f36ce2c7", "text": "A hallmark of glaucomatous optic nerve damage is retinal ganglion cell (RGC) death. RGCs, like other central nervous system neurons, have a limited capacity to survive or regenerate an axon after injury. Strategies that prevent or slow down RGC degeneration, in combination with intraocular pressure management, may be beneficial to preserve vision in glaucoma. Recent progress in neurobiological research has led to a better understanding of the molecular pathways that regulate the survival of injured RGCs. Here we discuss a variety of experimental strategies including intraocular delivery of neuroprotective molecules, viral-mediated gene transfer, cell implants and stem cell therapies, which share the ultimate goal of promoting RGC survival after optic nerve damage. The challenge now is to assess how this wealth of knowledge can be translated into viable therapies for the treatment of glaucoma and other optic neuropathies.", "title": "" }, { "docid": "86502e1c68f309bb7676d5b1e9013172", "text": "In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to the previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile pose. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making the 2D face alignment full-pose. In Menpo 3D benchmark, a united landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to the real-world 3D space. 
Based on the considerable number of annotated images, we organised Menpo 2D Challenge and Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with the abundant data, lead to excellent results. We also provide a very simple, yet effective solution, named Cascade Multi-view Hourglass Model, to 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment.", "title": "" }, { "docid": "0831efef8bd60441b0aa2b0a917d04c2", "text": "Light-weight antenna arrays require utilizing the same antenna aperture to provide multiple functions (e.g., communications and radar) in separate frequency bands. In this paper, we present a novel antenna element design for a dual-band array, comprising interleaved printed dipoles spaced to avoid grating lobes in each band. The folded dipoles are designed to be resonant at octave-separated frequency bands (1 and 2 GHz), and inkjet-printed on photographic paper. Each dipole is gap-fed by voltage induced electromagnetically from a microstrip line on the other side of the substrate. This nested element configuration shows excellent corroboration between simulated and measured data, with 10-dB return loss bandwidth of at least 5% for each band and interchannel isolation better than 15 dB. The measured element gain is 5.3 to 7 dBi in the two bands, with cross-polarization less than -25 dBi. A large array containing 39 printed dipoles has been fabricated on paper, with each dipole individually fed to facilitate independent beam control. Measurements on the array reveal broadside gain of 12 to 17 dBi in each band with low cross-polarization.", "title": "" }, { "docid": "3a00a29587af4f7c5ce974a8e6970413", "text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. 
Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.", "title": "" }, { "docid": "4097fe8240f8399de8c0f7f6bdcbc72f", "text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.", "title": "" }, { "docid": "049a7164a973fb515ed033ba216ec344", "text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.", "title": "" }, { "docid": "9a9bc57a279c4b88279bb1078e1e8a45", "text": "We study the problem of visualizing large-scale and highdimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. 
These two steps suffer from considerable computational costs, preventing the state-ofthe-art methods such as the t-SNE from scaling to largescale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to tSNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of highdimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets.", "title": "" }, { "docid": "64306a76b61bbc754e124da7f61a4fbe", "text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.", "title": "" }, { "docid": "8a679c93185332398c5261ddcfe81e84", "text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.", "title": "" }, { "docid": "41df403d437a17cb65915b755060ef8a", "text": "User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multibiometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. 
These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Further, multibiometric systems provide anti-spoofing measures by making it difficult for an intruder to spoof multiple biometric traits simultaneously. However, an effective fusion scheme is necessary to combine the information presented by multiple domain experts. This paper addresses the problem of information fusion in biometric verification systems by combining information at the matching score level. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.", "title": "" }, { "docid": "08eac8e69ef59d9149f071472fb55670", "text": "This paper describes the issues and tradeoffs in the design and monolithic implementation of direct-conversion receivers and proposes circuit techniques that can alleviate the drawbacks of this architecture. Following a brief study of heterodyne and image-reject topologies, the direct-conversion architecture is introduced and effects such as dc offset, I=Q mismatch, even-order distortion, flicker noise, and oscillator leakage are analyzed. Related design techniques for amplification and mixing, quadrature phase calibration, and baseband processing are also described.", "title": "" }, { "docid": "db622838ba5f6c76f66125cf76c47b40", "text": "In recent years, the study of lightweight symmetric ciphers has gained interest due to the increasing demand for security services in constrained computing environments, such as in the Internet of Things. However, when there are several algorithms to choose from and different implementation criteria and conditions, it becomes hard to select the most adequate security primitive for a specific application. This paper discusses the hardware implementations of Present, a standardized lightweight cipher called to overcome part of the security issues in extremely constrained environments. The most representative realizations of this cipher are reviewed and two novel designs are presented. Using the same implementation conditions, the two new proposals and three state-of-the-art designs are evaluated and compared, using area, performance, energy, and efficiency as metrics. From this wide experimental evaluation, to the best of our knowledge, new records are obtained in terms of implementation size and energy consumption. In particular, our designs result to be adequate in regards to energy-per-bit and throughput-per-slice.", "title": "" }, { "docid": "d4cd6414a9edbd6f07b4a0b5f298e2ba", "text": "Measuring Semantic Textual Similarity (STS), between words/ terms, sentences, paragraph and document plays an important role in computer science and computational linguistic. It also has many applications over several fields such as Biomedical Informatics and Geoinformation. In this paper, we present a survey on different methods of textual similarity and we also reported about the availability of different software and tools those are useful for STS. In natural language processing (NLP), STS is a important component for many tasks such as document summarization, word sense disambiguation, short answer grading, information retrieval and extraction. We split out the measures for semantic similarity into three broad categories such as (i) Topological/Knowledge-based (ii) Statistical/ Corpus Based (iii) String based. More emphasis is given to the methods related to the WordNet taxonomy. 
Because topological methods play an important role in understanding the intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences. This proposed method uses the advantages of taxonomy methods and merges this information into a language model. It considers the WordNet synsets for lexical relationships between nodes/words, and a uni-gram language model is implemented over a large corpus to assign the information content value between the two nodes of different classes.", "title": "" }, { "docid": "603c82380d4896b324f4511c301972e5", "text": "Pseudolymphomatous folliculitis (PLF), which clinically mimics cutaneous lymphoma, is a rare manifestation of cutaneous pseudolymphoma and cutaneous lymphoid hyperplasia. Here, we report on a 45-year-old Japanese woman with PLF. Dermoscopy findings revealed prominent arborizing vessels with small perifollicular and follicular yellowish spots and follicular red dots. A biopsy specimen also revealed dense lymphocytes, especially CD1a+ cells, infiltrated around the hair follicles. Without any additional treatment, the patient's nodule rapidly decreased. The presented case suggests that typical dermoscopy findings could be a possible supportive tool for the diagnosis of PLF.", "title": "" }, { "docid": "edf548598375ea1e36abd57dd3bad9c7", "text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that make followers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics of unequal status intergroup relations. In addition, a fundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical support for the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls of prototype-based leadership.", "title": "" }, { "docid": "85b95ad66c0492661455281177004b9e", "text": "Although relatively small in size and power output, automotive accessory motors play a vital role in improving such critical vehicle characteristics as drivability, comfort, and, most importantly, fuel economy. This paper describes a design method and experimental verification of a novel technique for torque ripple reduction in stator claw-pole permanent-magnet (PM) machines, which are a promising technology prospect for automotive accessory motors.", "title": "" } ]
scidocsrr
51883090b3992ff102603f118f991367
Crowd Map: Accurate Reconstruction of Indoor Floor Plans from Crowdsourced Sensor-Rich Videos
[ { "docid": "f085832faf1a2921eedd3d00e8e592db", "text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.", "title": "" }, { "docid": "9ad145cd939284ed77919b73452236c0", "text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.", "title": "" } ]
[ { "docid": "bfc349d95143237cc1cf55f77cb2044f", "text": "Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.", "title": "" }, { "docid": "fe6f81141e58bf5cf13bec80e033e197", "text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.", "title": "" }, { "docid": "48e26039d9b2e4ed3cfdbc0d3ba3f1d0", "text": "This paper presents a trajectory generator and an active compliance control scheme, unified in a framework to synthesize dynamic, feasible and compliant trot-walking locomotion cycles for a stiff-by-nature hydraulically actuated quadruped robot. At the outset, a CoP-based trajectory generator that is constructed using an analytical solution is implemented to obtain feasible and dynamically balanced motion references in a systematic manner. Initial conditions are uniquely determined for symmetrical motion patterns, enforcing that trajectories are seamlessly connected both in position, velocity and acceleration levels, regardless of the given support phase. The active compliance controller, used simultaneously, is responsible for sufficient joint position/force regulation. An admittance block is utilized to compute joint displacements that correspond to joint force errors. In addition to position feedback, these joint displacements are inserted to the position control loop as a secondary feedback term. In doing so, active compliance control is achieved, while the position/force trade-off is modulated via the virtual admittance parameters. Various trot-walking experiments are conducted with the proposed framework using HyQ, a ~ 75kg hydraulically actuated quadruped robot. We present results of repetitive, continuous, and dynamically equilibrated trot-walking locomotion cycles, both on level surface and uneven surface walking experiments.", "title": "" }, { "docid": "7c291acaf26a61dc5155af21d12c2aaf", "text": "Recently, deep learning and deep neural networks have attracted considerable attention and emerged as one predominant field of research in the artificial intelligence community. The developed techniques have also gained widespread use in various domains with good success, such as automatic speech recognition, information retrieval and text classification, etc. 
Among them, long short-term memory (LSTM) networks are well suited to such tasks, which can capture long-range dependencies among words efficiently, meanwhile alleviating the gradient vanishing or exploding problem during training effectively. Following this line of research, in this paper we explore a novel use of a Siamese LSTM based method to learn more accurate document representation for text categorization. Such a network architecture takes a pair of documents with variable lengths as the input and utilizes pairwise learning to generate distributed representations of documents that can more precisely render the semantic distance between any pair of documents. In doing so, documents associated with the same semantic or topic label could be mapped to similar representations having a relatively higher semantic similarity. Experiments conducted on two benchmark text categorization tasks, viz. IMDB and 20Newsgroups, show that using a three-layer deep neural network based classifier that takes a document representation learned from the Siamese LSTM sub-networks as the input can achieve competitive performance in relation to several state-of-the-art methods.", "title": "" }, { "docid": "c2f338aef785f0d6fee503bf0501a558", "text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.", "title": "" }, { "docid": "ee6461f83cee5fdf409a130d2cfb1839", "text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. 
Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.", "title": "" }, { "docid": "41539545b3d1f6a90607cc75d1dccf2b", "text": "Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box.", "title": "" }, { "docid": "faf53f190fe226ce14f32f9d44d551b5", "text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.", "title": "" }, { "docid": "97cc6d9ed4c1aba0dc09635350a401ee", "text": "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. 
Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts.\n SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "title": "" }, { "docid": "41dc9d6fd6a0550cccac1bc5ba27b11d", "text": "A low-power forwarded-clock I/O transceiver architecture is presented that employs a high degree of output/input multiplexing, supply-voltage scaling with data rate, and low-voltage circuit techniques to enable low-power operation. The transmitter utilizes a 4:1 output multiplexing voltage-mode driver along with 4-phase clocking that is efficiently generated from a passive poly-phase filter. The output driver voltage swing is accurately controlled from 100–200 <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm mV}_{\\rm ppd}$</tex></formula> using a low-voltage pseudo-differential regulator that employs a partial negative-resistance load for improved low frequency gain. 1:8 input de-multiplexing is performed at the receiver equalizer output with 8 parallel input samplers clocked from an 8-phase injection-locked oscillator that provides more than 1UI de-skew range. In the transmitter clocking circuitry, per-phase duty-cycle and phase-spacing adjustment is implemented to allow adequate timing margins at low operating voltages. Fabricated in a general purpose 65 nm CMOS process, the transceiver achieves 4.8–8 Gb/s at 0.47–0.66 pJ/b energy efficiency for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm V}_{\\rm DD}=0.6$</tex></formula>–0.8 V.", "title": "" }, { "docid": "e6665cc046733c66103506cdbb4814d2", "text": "....................................................................... 2 Table of", "title": "" }, { "docid": "6101f4582b1ad0b0306fe3d513940fab", "text": "Although a great deal of media attention has been given to the negative effects of playing video games, relatively less attention has been paid to the positive effects of engaging in this activity. Video games in health care provide ample examples of innovative ways to use existing commercial games for health improvement or surgical training. Tailor-made games help patients be more adherent to treatment regimens and train doctors how to manage patients in different clinical situations. In this review, examples in the scientific literature of commercially available and tailor-made games used for education and training with patients and medical students and doctors are summarized. There is a history of using video games with patients from the early days of gaming in the 1980s, and this has evolved into a focus on making tailor-made games for different disease groups, which have been evaluated in scientific trials more recently. Commercial video games have been of interest regarding their impact on surgical skill. More recently, some basic computer games have been developed and evaluated that train doctors in clinical skills. 
The studies presented in this article represent a body of work outlining positive effects of playing video games in the area of health care.", "title": "" }, { "docid": "7cf7b6d0ad251b98956a29ad9192cb63", "text": "A method for two dimensional position finding of stationary targets whose bearing measurements suffers from indeterminable bias and random noise has been proposed. The algorithm uses convex optimization to minimize an error function which has been calculated based on circular as well as linear loci of error. Taking into account a number of observations, certain modifications have been applied to the initial crude method so as to arrive at a faster, more accurate method. Simulation results of the method illustrate up to 30% increase in accuracy compared with the well-known least square filter.", "title": "" }, { "docid": "cb702c48a242c463dfe1ac1f208acaa2", "text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.", "title": "" }, { "docid": "afc9fbf2db89a5220c897afcbabe028f", "text": "Evidence for viewpoint-specific image-based object representations have been collected almost entirely using exemplar-specific recognition tasks. Recent results, however, implicate image-based processes in more categorical tasks, for instance when objects contain qualitatively different 3D parts. Although such discriminations approximate class-level recognition. they do not establish whether image-based representations can support generalization across members of an object class. This issue is critical to any theory of recognition, in that one hallmark of human visual competence is the ability to recognize unfamiliar instances of a familiar class. The present study addresses this questions by testing whether viewpoint-specific representations for some members of a class facilitate the recognition of other members of that class. Experiment 1 demonstrates that familiarity with several members of a class of novel 3D objects generalizes in a viewpoint-dependent manner to cohort objects from the same class. Experiment 2 demonstrates that this generalization is based on the degree of familiarity and the degree of geometrical distinctiveness for particular viewpoints. Experiment 3 demonstrates that this generalization is restricted to visually-similar objects rather than all objects learned in a given context. These results support the hypothesis that image-based representations are viewpoint dependent, but that these representations generalize across members of perceptually-defined classes. 
More generally, these results provide evidence for a new approach to image-based recognition in which object classes are represented as cluster of visually-similar viewpoint-specific representations.", "title": "" }, { "docid": "b4b6b51c8f8a0da586fe66b61711222c", "text": "Although game-tree search works well in perfect-information games, it is less suitable for imperfect-information games such as contract bridge. The lack of knowledge about the opponents' possible moves gives the game tree a very large branching factor, making it impossible to search a signiicant portion of this tree in a reasonable amount of time. This paper describes our approach for overcoming this problem. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. We have tested this approach on declarer play in the game of bridge, in an implementation called Tignum 2. On 5000 randomly generated notrump deals, Tignum 2 beat the strongest commercially available program by 1394 to 1302, with 2304 ties. These results are statistically signiicant at the = 0:05 level. Tignum 2 searched an average of only 8745.6 moves per deal in an average time of only 27.5 seconds per deal on a Sun SPARCstation 10. Further enhancements to Tignum 2 are currently underway.", "title": "" }, { "docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2", "text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.", "title": "" }, { "docid": "97adb3a003347f579706cd01a762bdc9", "text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.", "title": "" }, { "docid": "e29596a39ef50454de3035c5bd80e68a", "text": "A microfluidic device designed to generate monodispersed picoliter to femtoliter sized droplet emulsions at controlled rates is presented. 
This PDMS microfabricated device utilizes the geometry of the channel junctions in addition to the flow rates to control the droplet sizes. An expanding nozzle is used to control the breakup location of the droplet generation process. The droplet breakup occurs at a fixed point downstream of the nozzle, and droplets with sizes down to 100 nm can be generated at controlled rates.", "title": "" }, { "docid": "dd4a95a6ffdb1a1c5c242b7a5d969d29", "text": "A microstrip antenna with frequency agility and polarization diversity is presented. Commercially available packaged RF microelectrical-mechanical (MEMS) single-pole double-throw (SPDT) devices are used with a novel feed network to provide four states of polarization control; linear-vertical, linear-horizontal, left-hand circular and right-handed circular. Also, hyper-abrupt silicon junction tuning diodes are used to tune the antenna center frequency from 0.9-1.5 GHz. The microstrip antenna is 1 in x 1 in, and is fabricated on a 4 in x 4 in commercial-grade dielectric laminate. To the authors' knowledge, this is the first demonstration of an antenna element with four polarization states across a tunable bandwidth of 1.4:1.", "title": "" } ]
scidocsrr
08e952323708557df37939ab80bf692e
Continuum regression for cross-modal multimedia retrieval
[ { "docid": "6508fc8732fd22fde8c8ac180a2e19e3", "text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "title": "" }, { "docid": "0d292d5c1875845408c2582c182a6eb9", "text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation", "title": "" } ]
[ { "docid": "5a74a585fb58ff09c05d807094523fb9", "text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.", "title": "" }, { "docid": "e08990fec382e1ba5c089d8bc1629bc5", "text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. 
The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. The detailed content is described as follows. 2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.", "title": "" }, { "docid": "28531c596a9df30b91d9d1e44d5a7081", "text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.", "title": "" }, { "docid": "7d820e831096dac701e7f0526a8a11da", "text": "We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80% of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.", "title": "" }, { "docid": "05e754e0567bf6859d7a68446fc81bad", "text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. 
How can doctors improve the presentation of statistical information so that patients can make well informed decisions?", "title": "" }, { "docid": "dd1fd4f509e385ea8086a45a4379a8b5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "1ed93d114804da5714b7b612f40e8486", "text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.", "title": "" }, { "docid": "d18c77b3d741e1a7ed10588f6a3e75c0", "text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. 
When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.", "title": "" }, { "docid": "fd6eea8007c3e58664ded211bfbc52f7", "text": "We present our overall third ranking solution for the KDD Cup 2010 on educational data mining. The goal of the competition was to predict a student’s ability to answer questions correctly, based on historic results. In our approach we use an ensemble of collaborative filtering techniques, as used in the field of recommender systems and adopt them to fit the needs of the competition. The ensemble of predictions is finally blended, using a neural network.", "title": "" }, { "docid": "d1c2c0b74caf85f25d761128ed708e6c", "text": "Nearly all our buildings and workspaces are protected against fire breaks, which may occur due to some fault in the electric circuitries and power sources. The immediate alarming and aid to extinguish the fire in such situations of fire breaks are provided using embedded systems installed in the buildings. But as the area being monitored against such fire threats becomes vast, these systems do not provide a centralized solution. For the protection of such a huge area, like a college campus or an industrial park, a centralized wireless fire control system using Wireless sensor network technology is developed. The system developed connects the five dangers prone zones of the campus with a central control room through a ZigBee communication interface such that in case of any fire break in any of the building, a direct communication channel is developed that will send an immediate signal to the control room. In case if any of the emergency zone lies out of reach of the central node, multi hoping technique is adopted for the effective transmitting of the signal. The five nodes maintains a wireless interlink among themselves as well as with the central node for this purpose. Moreover a hooter is attached along with these nodes to notify the occurrence of any fire break such that the persons can leave the building immediately and with the help of the signal received in the control room, the exact building where the fire break occurred is identified and fire extinguishing is done. The real time system developed is implemented in Atmega32 with temperature, fire and humidity sensors and ZigBee module.", "title": "" }, { "docid": "2ff3d496f0174ffc0e3bd21952c8f0ae", "text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). 
With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. A more specific observation he offered lies at the center of our own modern understanding of mental operations:", "title": "" }, { "docid": "f64e65df9db7219336eafb20d38bf8cf", "text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.", "title": "" }, { "docid": "a120d11f432017c3080bb4107dd7ea71", "text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.", "title": "" }, { "docid": "581efb9277c3079a0f2bf59949600739", "text": "Artificial Intelligence methods are becoming very popular in medical applications due to high reliability and ease. From the past decades, Artificial Intelligence techniques such as Artificial Neural Networks, Fuzzy Expert Systems, Robotics etc have found an increased usage in disease diagnosis, patient monitoring, disease risk evaluation, predicting effect of new medicines and robotic handling of surgeries. This paper presents an introduction and survey on different artificial intelligence methods used by researchers for the application of diagnosing or predicting Hypertension. 
Keywords-Hypertension, Artificial Neural Networks, Fuzzy Systems.", "title": "" }, { "docid": "b236003ad282e973b3ebf270894c2c07", "text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.", "title": "" }, { "docid": "1ad08b9ecc0a08f5e0847547c55ea90d", "text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.", "title": "" }, { "docid": "acd95dfc27228f107fa44b0dc5039b72", "text": "How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is develop a new algorithm based on the insights gained from the novel formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is to be able to update the weights in an on-line fashion. We have also developed an on-line version of the proposed algorithm, that is based on updating the error gradient approximation in a recursive manner.", "title": "" }, { "docid": "87eed35ce26bf0194573f3ed2e6be7ca", "text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. 
Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.", "title": "" }, { "docid": "9f786e59441784d821da00d07d2fc42e", "text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.", "title": "" } ]
scidocsrr
0a9d4d03ae5a56ee88bcb855ccb97ef2
Supervised Attentions for Neural Machine Translation
[ { "docid": "34964b0f46c09c5eeb962f26465c3ee1", "text": "Attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and undertranslation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust the future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both alignment and translation quality over NMT without coverage.", "title": "" }, { "docid": "6dce88afec3456be343c6a477350aa49", "text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).", "title": "" }, { "docid": "8acd410ff0757423d09928093e7e8f63", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .", "title": "" } ]
[ { "docid": "f6774efff6e22c96a43e377deb630e16", "text": "The emergence of various and disparate social media platforms has opened opportunities for the research on cross-platform media analysis. This provides huge potentials to solve many challenging problems which cannot be well explored in one single platform. In this paper, we investigate into cross-platform social relation and behavior information to address the cold-start friend recommendation problem. In particular, we conduct an in-depth data analysis to examine what information can better transfer from one platform to another and the result demonstrates a strong correlation for the bidirectional relation and common contact behavior between our test platforms. Inspired by the observations, we design a random walk-based method to employ and integrate these convinced social information to boost friend recommendation performance. To validate the effectiveness of our cross-platform social transfer learning, we have collected a cross-platform dataset including 3,000 users with recognized accounts in both Flickr and Twitter. We demonstrate the effectiveness of the proposed friend transfer methods by promising results.", "title": "" }, { "docid": "5a7324f328a7b5db8c3cb1cc9b606cbc", "text": "We consider a multiple-block separable convex programming problem, where the objective function is the sum of m individual convex functions without overlapping variables, and the constraints are linear, aside from side constraints. Based on the combination of the classical Gauss–Seidel and the Jacobian decompositions of the augmented Lagrangian function, we propose a partially parallel splitting method, which differs from existing augmented Lagrangian based splitting methods in the sense that such an approach simplifies the iterative scheme significantly by removing the potentially expensive correction step. Furthermore, a relaxation step, whose computational cost is negligible, can be incorporated into the proposed method to improve its practical performance. Theoretically, we establish global convergence of the new method in the framework of proximal point algorithm and worst-case nonasymptotic O(1/t) convergence rate results in both ergodic and nonergodic senses, where t counts the iteration. The efficiency of the proposed method is further demonstrated through numerical results on robust PCA, i.e., factorizing from incomplete information of an B Junfeng Yang jfyang@nju.edu.cn Liusheng Hou houlsheng@163.com Hongjin He hehjmath@hdu.edu.cn 1 School of Mathematics and Information Technology, Key Laboratory of Trust Cloud Computing and Big Data Analysis, Nanjing Xiaozhuang University, Nanjing 211171, China 2 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China 3 Department of Mathematics, Nanjing University, Nanjing 210093, China", "title": "" }, { "docid": "27e25565910119837ff0ddf852c8372a", "text": "Controlled hovering of motor driven flapping wing micro aerial vehicles (FWMAVs) is challenging due to its limited control authority, large inertia, vibration produced by wing strokes, and limited components accuracy due to fabrication methods. In this work, we present a hummingbird inspired FWMAV with 12 grams of weight and 20 grams of maximum lift. We present its full non-linear dynamic model including the full inertia tensor, non-linear input mapping, and damping effect from flapping counter torques (FCTs) and flapping counter forces (FCFs). 
We also present a geometric flight controller to ensure exponential stability and global exponential attractiveness. We experimentally demonstrated the vehicle lifting off and hovering with attitude stabilization.", "title": "" }, { "docid": "6e848928859248e0597124cee0560e43", "text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.", "title": "" }, { "docid": "f702a8c28184a6d49cd2f29a1e4e7ea4", "text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.", "title": "" }, { "docid": "61da3c6eaa2e140bcd218e1d81a7c803", "text": "Sub-Resolution Assist Feature (SRAF) generation is a very important resolution enhancement technique to improve yield in modern semiconductor manufacturing process. Model-based SRAF generation has been widely used to achieve high accuracy but it is known to be time consuming and it is hard to obtain consistent SRAFs on the same layout pattern configurations. This paper proposes the first machine learning based framework for fast yet consistent SRAF generation with high quality of results. Our technical contributions include robust feature extraction, novel feature compaction, model training for SRAF classification and prediction, and the final SRAF generation with consideration of practical mask manufacturing constraints.
Experimental results demonstrate that, compared with commercial Calibre tool, our machine learning based SRAF generation obtains 10X speed up and comparable performance in terms of edge placement error (EPE) and process variation (PV) band.", "title": "" }, { "docid": "6e678ccfefa93d1d27a36b28ac5737c4", "text": "BACKGROUND\nBiofilm formation is a major virulence factor in different bacteria. Biofilms allow bacteria to resist treatment with antibacterial agents. The biofilm formation on glass and steel surfaces, which are extremely useful surfaces in food industries and medical devices, has always had an important role in the distribution and transmission of infectious diseases.\n\n\nOBJECTIVES\nIn this study, the effect of coating glass and steel surfaces by copper nanoparticles (CuNPs) in inhibiting the biofilm formation by Listeria monocytogenes and Pseudomonas aeruginosa was examined.\n\n\nMATERIALS AND METHODS\nThe minimal inhibitory concentrations (MICs) of synthesized CuNPs were measured against L. monocytogenes and P. aeruginosa by using the broth-dilution method. The cell-surface hydrophobicity of the selected bacteria was assessed using the bacterial adhesion to hydrocarbon (BATH) method. Also, the effect of the CuNP-coated surfaces on the biofilm formation of the selected bacteria was calculated via the surface assay.\n\n\nRESULTS\nThe MICs for the CuNPs according to the broth-dilution method were ≤ 16 mg/L for L. monocytogenes and ≤ 32 mg/L for P. aeruginosa. The hydrophobicity of P. aeruginosa and L. monocytogenes was calculated as 74% and 67%, respectively. The results for the surface assay showed a significant decrease in bacterial attachment and colonization on the CuNP-covered surfaces.\n\n\nCONCLUSIONS\nOur data demonstrated that the CuNPs inhibited bacterial growth and that the CuNP-coated surfaces decreased the microbial count and the microbial biofilm formation. Such CuNP-coated surfaces can be used in medical devices and food industries, although further studies in order to measure their level of toxicity would be necessary.", "title": "" }, { "docid": "beea84b0d96da0f4b29eabf3b242a55c", "text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. 
The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.", "title": "" }, { "docid": "3c3d8cc7e6a616d46cab7b603f06198c", "text": "PURPOSE\nTo investigate the impact of human papillomavirus (HPV) on the epidemiology of oral squamous cell carcinomas (OSCCs) in the United States, we assessed differences in patient characteristics, incidence, and survival between potentially HPV-related and HPV-unrelated OSCC sites.\n\n\nPATIENTS AND METHODS\nData from nine Surveillance, Epidemiology, and End Results program registries (1973 to 2004) were used to classify OSCCs by anatomic site as potentially HPV-related (n = 17,625) or HPV-unrelated (n = 28,144). Joinpoint regression and age-period-cohort models were used to assess incidence trends. Life-table analyses were used to compare 2-year overall survival for HPV-related and HPV-unrelated OSCCs.\n\n\nRESULTS\nHPV-related OSCCs were diagnosed at younger ages than HPV-unrelated OSCCs (mean ages at diagnosis, 61.0 and 63.8 years, respectively; P < .001). Incidence increased significantly for HPV-related OSCC from 1973 to 2004 (annual percentage change [APC] = 0.80; P < .001), particularly among white men and at younger ages. By contrast, incidence for HPV-unrelated OSCC was stable through 1982 (APC = 0.82; P = .186) and declined significantly during 1983 to 2004 (APC = -1.85; P < .001). When treated with radiation, improvements in 2-year survival across calendar periods were more pronounced for HPV-related OSCCs (absolute increase in survival from 1973 through 1982 to 1993 through 2004 for localized, regional, and distant stages = 9.9%, 23.1%, and 18.6%, respectively) than HPV-unrelated OSCCs (5.6%, 3.1%, and 9.9%, respectively). During 1993 to 2004, for all stages treated with radiation, patients with HPV-related OSCCs had significantly higher survival rates than those with HPV-unrelated OSCCs.\n\n\nCONCLUSION\nThe proportion of OSCCs that are potentially HPV-related increased in the United States from 1973 to 2004, perhaps as a result of changing sexual behaviors. Recent improvements in survival with radiotherapy may be due in part to a shift in the etiology of OSCCs.", "title": "" }, { "docid": "0cc61499ca4eaba9d23214fc7985f71c", "text": "We review the recent progress of the latest 100G to 1T class coherent PON technology using a simplified DSP suitable for forthcoming 5G era optical access systems. The highlight is the presentation of the first demonstration of 100 Gb/s/λ × 8 (800 Gb/s) based PON.", "title": "" }, { "docid": "d22c8390e6ea9ea8c7a84e188cd10ba5", "text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. 
Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.", "title": "" }, { "docid": "774f1a2403acf459a4eb594c5772a362", "text": "ISSARS: An integrated software environment for structure-specific earthquake ground motion selection. Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfilling the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment.", "title": "" }, { "docid": "4513872c2240390dca8f4b704e606157", "text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.", "title": "" }, { "docid": "534809d7f65a645c7f7d7ab1089c080a", "text": "In this paper, we study the implications of the commonplace assumption that most social media studies make with respect to the nature of message shares (such as retweets) as a predominantly positive interaction. By analyzing two large longitudinal Brazilian Twitter datasets containing 5 years of conversations on two polarizing topics – Politics and Sports, we empirically demonstrate that groups holding antagonistic views can actually retweet each other more often than they retweet other groups. 
We show that assuming retweets as endorsement interactions can lead to misleading conclusions with respect to the level of antagonism among social communities, and that this apparent paradox is explained in part by the use of retweets to quote the original content creator out of the message’s original temporal context, for humor and criticism purposes. As a consequence, messages diffused on online media can have their polarity reversed over time, what poses challenges for social and computer scientists aiming to classify and track opinion groups on online media. On the other hand, we found that the time users take to retweet a message after it has been originally posted can be a useful signal to infer antagonism in social platforms, and that surges of out-of-context retweets correlate with sentiment drifts triggered by real-world events. We also discuss how such evidences can be embedded in sentiment analysis models.", "title": "" }, { "docid": "b46a967ad85c5b64c0f14f703d385b24", "text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.", "title": "" }, { "docid": "9cfa58c71360b596694a27eea19f3f66", "text": "Introduction. The use of social media is prevalent among college students, and it is important to understand how social media use may impact students' attitudes and behaviour. Prior studies have shown negative outcomes of social media use, but researchers have not fully discovered or fully understand the processes and implications of these negative effects. This research provides additional scientific knowledge by focussing on mediators of social media use and controlling for key confounding variables. Method. Surveys that captured social media use, various attitudes about academics and life, and personal characteristics were completed by 234 undergraduate students at a large U.S. university. Analysis. We used covariance-based structural equation modelling to analyse the response data. Results. Results indicated that after controlling for self-regulation, social media use was negatively associated with academic self-efficacy and academic performance. Additionally, academic self-efficacy mediated the negative relationship between social media use and satisfaction with life. Conclusion. 
There are negative relationships between social media use and academic performance, as well as with academic self-efficacy beliefs. Academic self-efficacy beliefs mediate the negative relationship between social media use and satisfaction with life. These relationships are present even when controlling for individuals' levels of self-regulation.", "title": "" }, { "docid": "023285cbd5d356266831fc0e8c176d4f", "text": "The two authors, Lakoff, a linguist, and Nunez, a psychologist, purport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematics (recursive theory, model theory, set theory, higher-order logic) still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.", "title": "" }, { "docid": "d22e8f2029e114b0c648a2cdfba4978a", "text": "This paper considers innovative marketing within the context of a micro firm, exploring how such a firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and a case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.", "title": "" }, { "docid": "87ea9ac29f561c26e4e6e411f5bb538c", "text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors in space, models patient health state trajectories through explicit memory of historical records. 
Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden – diabetes and mental health – the results show improved modeling and risk prediction accuracy.", "title": "" }, { "docid": "e795381a345bf3cab74ddfd4d4763c1e", "text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.", "title": "" } ]
scidocsrr
a878e2419a221c2d3ea14f442da19ba2
Effects of Website Interactivity on Online Retail Shopping Behavior
[ { "docid": "57b945df75d8cd446caa82ae02074c3a", "text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …", "title": "" }, { "docid": "205ef76e947feb4bddbe86b0835e20b3", "text": "Received: 12 July 2000 Revised: 20 August 2001 : 30 July 2002 Accepted: 15 October 2002 Abstract This paper explores factors that influence consumer’s intentions to purchase online at an electronic commerce website. Specifically, we investigate online purchase intention using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. We summarise and review the antecedents of online purchase intention that have been developed within these two perspectives. An empirical study in which the contributions of both perspectives are investigated is reported. We study the perceptions of 228 potential online shoppers regarding trust and technology and their attitudes and intentions to shop online at particular websites. 
In terms of relative contributions, we found that the trust-antecedent ‘perceived risk’ and the technology-antecedent ‘perceived ease-of-use’ directly influenced the attitude towards purchasing online. European Journal of Information Systems (2003) 12, 41–48. doi:10.1057/palgrave.ejis.3000445", "title": "" } ]
[ { "docid": "617bb88fdb8b76a860c58fc887ab2bc4", "text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.", "title": "" }, { "docid": "8666fe5a01f032d744a3e798241a30f6", "text": "Emojis have gone viral on the Internet across platforms and devices. Interwoven into our daily communications, they have become a ubiquitous new language. However, little has been done to analyze the usage of emojis at scale and in depth. Why do some emojis become especially popular while others don’t? How are people using them among the words? In this work, we take the initiative to study the collective usage and behavior of emojis, and specifically, how emojis interact with their context. We base our analysis on a very large corpus collected from a popular emoji keyboard, which contains a full month of inputs from millions of users. Our analysis is empowered by a state-of-the-art machine learning tool that computes the embeddings of emojis and words in a semantic space. We find that emojis with clear semantic meanings are more likely to be adopted. While entity-related emojis are more likely to be used as alternatives to words, sentimentrelated emojis often play a complementary role in a message. Overall, emojis are significantly more prevalent in a senti-", "title": "" }, { "docid": "5a5ae4ab9b802fe6d5481f90a4aa07b7", "text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. 
The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.", "title": "" }, { "docid": "20cfcfde25db033db8d54fe7ae6fcca1", "text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.", "title": "" }, { "docid": "313c68843b2521d553772dd024eec202", "text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.", "title": "" }, { "docid": "972abdbc8667c24ae080eb2ffb7835e9", "text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). 
Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.", "title": "" }, { "docid": "b262ea4a0a8880d044c77acc84b0c859", "text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.", "title": "" }, { "docid": "7ea3d3002506e0ea6f91f4bdab09c2d5", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "1527c70d0b78a3d2aa6886282425c744", "text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. 
Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.", "title": "" }, { "docid": "5601a0da8cfaf42d30b139c535ae37db", "text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.", "title": "" }, { "docid": "8758425824753fea372eeeeb18ee5856", "text": "By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more calls of fitness functions and fewer individuals.", "title": "" }, { "docid": "e7e9d6054a61a1f4a3ab7387be28538a", "text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. 
Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.", "title": "" }, { "docid": "2c7fe5484b2184564d71a03f19188251", "text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.", "title": "" }, { "docid": "3aca00d6a5038876340b1fbe08e5ddb6", "text": "People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This article presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. 
Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.", "title": "" }, { "docid": "08d1a9f3edc449ff08b45caaaf56f6ad", "text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.", "title": "" }, { "docid": "c19bc89db255ecf88bc1514d8bd7d018", "text": "Fulfilling the requirements of point-of-care testing (POCT) training regarding proper execution of measurements and compliance with internal and external quality control specifications is a great challenge. Our aim was to compare the values of the highly critical parameter hemoglobin (Hb) determined with POCT devices and central laboratory analyzer in the highly vulnerable setting of an emergency department in a supra maximal care hospital to assess the quality of POCT performance. In 2548 patients, Hb measurements using POCT devices (POCT-Hb) were compared with Hb measurements performed at the central laboratory (Hb-ZL). Additionally, sub collectives (WHO anemia classification, patients with Hb <8 g/dl and suprageriatric patients (age >85y.) were analyzed. Overall, the correlation between POCT-Hb and Hb-ZL was highly significant (r = 0.96, p<0.001). Mean difference was -0.44g/dl. POCT-Hb values tended to be higher than Hb-ZL values (t(2547) = 36.1, p<0.001). Standard deviation of the differences was 0.62 g/dl. Only in 26 patients (1%), absolute differences >2.5g/dl occurred. McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition for male, female and total patients (♂ p<0.001; ♀ p<0.001, total p<0.001). Hb-ZL resulted significantly more often in anemia diagnosis. In samples with Hb<8g/dl, McNemar´s test yielded no significant difference (p = 0.169). In suprageriatric patients, McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition in male, female and total patients (♂ p<0.01; ♀ p = 0.002, total p<0.001). The difference between Hb-ZL and POCT-Hb with Hb<8g/dl was not statistically significant (<8g/dl, p = 1.000). 
Overall, we found a highly significant correlation between the analyzed hemoglobin concentration measurement methods, i.e. POCT devices and the central laboratory. The results confirm the successful implementation of the presented POCT concept. Nevertheless, some limitations could be identified in anemic patients, stressing the importance of carefully examining clinically implausible results.", "title": "" }, { "docid": "7f067f869481f06e865880e1d529adc8", "text": "Distributed Denial of Service (DDoS) is defined as an attack in which multiple compromised systems are made to attack a single target to make the services unavailable for legitimate users. It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDoS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to the entire internet world today. Any compromise to computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for a collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoS attack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDoS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.", "title": "" }, { "docid": "fc9babe40365e5dc943fccf088f7a44f", "text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.", "title": "" }, { "docid": "a53f26ef068d11ea21b9ba8609db6ddf", "text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3×3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. 
The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fd59754c40f05710496d3b9738f97e47", "text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. Survey results and followup interviews with 100 respondents revealed experience of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of results include the self-selection, relatively high functioning of participants, and respondent connections to a specific advocacy organization-the National Alliance for the Mentally Ill.", "title": "" } ]
scidocsrr
7727c17c6bb7423759ec4ff377681fb4
Facial Expression Recognition using Convolutional Neural Networks: State of the Art
[ { "docid": "dfacd79df58a78433672f06fdb10e5a2", "text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.", "title": "" } ]
[ { "docid": "e510140bfc93089e69cb762b968de5e9", "text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.", "title": "" }, { "docid": "751843f6085ba854dc75d9a6828bed13", "text": "With the developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals through Internet or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of ANN and LR methods in credit card fraud detection with a real data set.", "title": "" }, { "docid": "8f660dd12e7936a556322f248a9e2a2a", "text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results", "title": "" }, { "docid": "d2694577861e75535e59e316bd6a9015", "text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. 
This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-", "title": "" }, { "docid": "627587e2503a2555846efb5f0bca833b", "text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "title": "" }, { "docid": "6db737f9042631ddda9bae7c89b00701", "text": "A self-assessment of time management is developed for middle-school students. A sample of entering seventh-graders (N = 814) from five states across the USA completed this instrument, with 340 students retested 6 months later. Exploratory and confirmatory factor analysis suggested two factors (i.e., Meeting Deadlines and Planning) that adequately explain the variance in time management for this age group. Scales show evidence of reliability and validity; with high internal consistency, reasonable consistency of factor structure over time, moderate to high correlations with Conscientiousness, low correlations with the remaining four personality dimensions of the Big Five, and reasonable prediction of students’ grades. Females score significantly higher on both factors of time management, with gender differences in Meeting Deadlines (but not Planning) mediated by Conscientiousness. Potential applications of the instrument for evaluation, diagnosis, and remediation in educational settings are discussed. 2009 Elsevier Ltd. All rights reserved. 1. 
The assessment of time management in middle-school students. In our technologically enriched society, individuals are constantly required to multitask, prioritize, and work against deadlines in a timely fashion (Orlikowsky & Yates, 2002). Time management has caught the attention of educational researchers, industrial organizational psychologists, and entrepreneurs, for its possible impact on academic achievement, job performance, and quality of life (Macan, 1994). However, research on time management has not kept pace with this enthusiasm, with extant investigations suffering from a number of problems. Claessens, Van Eerde, Rutte, and Roe’s (2007) review of the literature suggests that there are three major limitations to research on time management. First, many measures of time management have limited validity evidence. Second, many studies rely solely on one-shot self-report assessment, such that evidence for a scale’s generalizability over time cannot be collected. Third, school (i.e., K-12) populations have largely been ignored. For example, all studies in the Claessens et al. (2007) review focus on adult workplace samples (e.g., teachers, engineers) or university students, rather than students in K-12. The current study involves the development of a time management assessment tailored specifically to middle-school students (i.e., adolescents in the sixth to eighth grade of schooling). Time management may be particularly important at the onset of adolescence for three reasons. First, the possibility of early identification and remediation of poor time management practices. Second, the transition into secondary education, from a learning environment involving one teacher to one of time-tabled classes for different subjects with different teachers setting assignments and tests that may occur contiguously. Successfully navigating this new learning environment requires the development of time management skills. Third, adolescents use large amounts of their discretionary time on television, computer gaming, internet use, and sports: Average estimates are 3=4 and 2=4 h per day for seventh-grade boys and girls, respectively (Van den Bulck, 2004). With less time left to do more administratively complex schoolwork, adolescents clearly require time management skills to succeed academically. 1.1. Definitions and assessments of time management. Time management has been defined and operationalized in several different ways: As a means for monitoring and controlling time, as setting goals in life and keeping track of time use, as prioritizing goals and generating tasks from the goals, and as the perception of a more structured and purposive life (e.g., Bond & Feather, 1988; Britton & Tesser, 1991; Burt & Kemp, 1994; Eilam & Aharon, 2003). The various definitions all converge on the same essential element: The completion of tasks within an expected timeframe while maintaining outcome quality, through mechanisms such as planning, organizing, prioritizing, or multitasking. To the same effect, Claessens et al. (2007) defined time management as \"behaviors that aim at achieving an effective use of time while performing certain goal-directed activities\" (p. 36). Four instruments have been used to assess time management in adults: The Time Management Behavior Scale (TMBS;
Personality and Individual Differences 47 (2009) 174–179", "title": "" }, { "docid": "9f7099655f70ff203c16802903e6acdc", "text": "Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneously liver detection and probabilistic segmentation using 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean ratios of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9, 2.7 %, 0.91, 1.88 and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean ratios of VOE, RVD, ASD, RMSD and MSD are 9.36, 0.97 %, 1.89, 4.15 and 33.14 mm, respectively. The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.", "title": "" }, { "docid": "c548cbfc3b1630392acd504b6e854c03", "text": "Much of capital market research in accounting over the past 20 years has assumed that the price adjustment process to information is instantaneous and/or trivial. This assumption has had an enormous influence on the way we select research topics, design empirical tests, and interpret research findings. In this discussion, I argue that price discovery is a complex process, deserving of more attention. I highlight significant problems associated with a na.ıve view of market efficiency, and advocate a more general model involving noise traders. Finally, I discuss the implications of recent evidence against market efficiency for future research. r 2001 Elsevier Science B.V. All rights reserved. JEL classification: M4; G0; B2; D8", "title": "" }, { "docid": "6639c05f14e220f4555c664b0c7b0466", "text": "Previous attempts for data augmentation are designed manually, and the augmentation policies are dataset-specific. Recently, an automatic data augmentation approach, named AutoAugment, is proposed using reinforcement learning. AutoAugment searches for the augmentation polices in the discrete search space, which may lead to a sub-optimal solution. In this paper, we employ the Augmented Random Search method (ARS) to improve the performance of AutoAugment. Our key contribution is to change the discrete search space to continuous space, which will improve the searching performance and maintain the diversities between sub-policies. With the proposed method, state-of-the-art accuracies are achieved on CIFAR-10, CIFAR-100, and ImageNet (without additional data). 
Our code is available at https://github.com/gmy2013/ARS-Aug.", "title": "" }, { "docid": "2781df07db142da8eefbe714631a59b2", "text": "Snapchat is a social media platform that allows users to send images, videos, and text with a specified amount of time for the receiver(s) to view the content before it becomes permanently inaccessible to the receiver. Using focus group methodology and in-depth interviews, the current study sought to understand young adult (18e23 years old; n 1⁄4 34) perceptions of how Snapchat behaviors influenced their interpersonal relationships (family, friends, and romantic). Young adults indicated that Snapchat served as a double-edged swordda communication modality that could lead to relational challenges, but also facilitate more congruent communication within young adult interpersonal relationships. © 2016 Elsevier Ltd. All rights reserved. Technology is now a regular part of contemporary young adult (18e25 years old) life (Coyne, Padilla-Walker, & Howard, 2013; Vaterlaus, Jones, Patten, & Cook, 2015). With technological convergence (i.e. accessibility of multiple media on one device; Brown & Bobkowski, 2011) young adults can access both entertainment media (e.g., television, music) and social media (e.g., social networking, text messaging) on a single device. Among adults, smartphone ownership is highest among young adults (85% of 18e29 year olds; Smith, 2015). Perrin (2015) reported that 90% of young adults (ages 18e29) use social media. Facebook remains the most popular social networking platform, but several new social media apps (i.e., applications) have begun to gain popularity among young adults (e.g., Twitter, Instagram, Pinterest; Duggan, Ellison, Lampe, Lenhart, & Madden, 2015). Considering the high frequency of social media use, Subrahmanyam and Greenfield (2008) have advocated for more research on how these technologies influence interpersonal relationships. The current exploratory study aterlaus), Kathryn_barnett@ (C. Roche), youngja2@unk. was designed to understand the perceived role of Snapchat (see www.snapchat.com) in young adults' interpersonal relationships (i.e. family, social, and romantic). 1. Theoretical framework Uses and Gratifications Theory (U&G) purports that media and technology users are active, self-aware, and goal directed (Katz, Blumler, & Gurevitch, 1973). Technology consumers link their need gratification with specific technology options, which puts different technology sources in competition with one another to satisfy a consumer's needs. Since the emergence of U&G nearly 80 years ago, there have been significant advances in media and technology, which have resulted in many more media and technology options for consumers (Ruggiero, 2000). Writing about the internet and U&G in 2000, Roggiero forecasted: “If the internet is a technology that many predict will be genuinely transformative, it will lead to profound changes in media users' personal and social habits and roles” (p.28). Advances in accessibility to the internet and the development of social media, including Snapchat, provide support for the validity of this prediction. Despite the advances in technology, the needs users seek to gratify are likely more consistent over time. Supporting this point Katz, Gurevitch, and Haas J.M. Vaterlaus et al. 
/ Computers in Human Behavior 62 (2016) 594e601 595", "title": "" }, { "docid": "88abea475884eeec1049a573d107c6c9", "text": "This paper extends the traditional pinhole camera projection geometry used in computer graphics to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. This model allows the generation of synthetic images which have a depth of field and can be focused on an arbitrary plane; it also permits selective modeling of certain optical characteristics of a lens. The model can be expanded to include motion blur and special-effect filters. These capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image.", "title": "" }, { "docid": "14c278147defd19feb4e18d31a3fdcfb", "text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.", "title": "" }, { "docid": "a9baecb9470242c305942f7bc98494ab", "text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.", "title": "" }, { "docid": "1a5c009f059ea28fd2d692d1de4eb913", "text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. 
CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.", "title": "" }, { "docid": "39bf7e3a8e75353a3025e2c0f18768f9", "text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.", "title": "" }, { "docid": "d0f187a8f7f6d4a6f8061a486f89c6bd", "text": "The science of ecology was born from the expansive curiosity of the biologists of the late 19th century, who wished to understand the distribution, abundance and interactions of the earth's organisms. Why do we have so many species, and why not more, they asked--and what causes them to be distributed as they are? 
What are the characteristics of a biological community that cause it to recover in a particular way after a disturbance?", "title": "" }, { "docid": "edb0442d3e3216a5e1add3a03b05858a", "text": "The resilience perspective is increasingly used as an approach for understanding the dynamics of social–ecological systems. This article presents the origin of the resilience perspective and provides an overview of its development to date. With roots in one branch of ecology and the discovery of multiple basins of attraction in ecosystems in the 1960–1970s, it inspired social and environmental scientists to challenge the dominant stable equilibrium view. The resilience approach emphasizes non-linear dynamics, thresholds, uncertainty and surprise, how periods of gradual change interplay with periods of rapid change and how such dynamics interact across temporal and spatial scales. The history was dominated by empirical observations of ecosystem dynamics interpreted in mathematical models, developing into the adaptive management approach for responding to ecosystem change. Serious attempts to integrate the social dimension is currently taking place in resilience work reflected in the large numbers of sciences involved in explorative studies and new discoveries of linked social–ecological systems. Recent advances include understanding of social processes like, social learning and social memory, mental models and knowledge–system integration, visioning and scenario building, leadership, agents and actor groups, social networks, institutional and organizational inertia and change, adaptive capacity, transformability and systems of adaptive governance that allow for management of essential ecosystem services. r 2006 Published by Elsevier Ltd.", "title": "" }, { "docid": "f0fa6c2b9216192ed0cf419e9f3c9666", "text": "Primary task of a recommender system is to improve user’s experience by recommending relevant and interesting items to the users. To this effect, diversity in item suggestion is as important as the accuracy of recommendations. Existing literature aimed at improving diversity primarily suggests a 2-stage mechanism – an existing CF scheme for rating prediction, followed by a modified ranking strategy. This approach requires heuristic selection of parameters and ranking strategies. Also most works focus on diversity from either the user or system’s perspective. In this work, we propose a single stage optimization based solution to achieve high diversity while maintaining requisite levels of accuracy. We propose to incorporate additional diversity enhancing constraints, in the matrix factorization model for collaborative filtering. However, unlike traditional MF scheme generating dense user and item latent factor matrices, our base MF model recovers a dense user and a sparse item latent factor matrix; based on a recent work. The idea is motivated by the fact that although a user will demonstrate some affinity towards all latent factors, an item will never possess all features; thereby yielding a sparse structure. We also propose an algorithm for our formulation. The superiority of our model over existing state of the art techniques is demonstrated by the results of experiments conducted on real world movie database. © 2016 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "dd97e87fee154f610e406c1cf9170abe", "text": "Magnetically-driven micrometer to millimeter-scale robotic devices have recently shown great capabilities for remote applications in medical procedures, in microfluidic tools and in microfactories. Significant effort recently has been on the creation of mobile or stationary devices with multiple independently-controllable degrees of freedom (DOF) for multiagent or complex mechanism motions. In most applications of magnetic microrobots, however, the relatively large distance from the field generation source and the microscale devices results in controlling magnetic field signals which are applied homogeneously over all agents. While some progress has been made in this area allowing up to six independent DOF to be individually commanded, there has been no rigorous effort in determining the maximum achievable number of DOF for systems with homogeneous magnetic field input. In this work, we show that this maximum is eight and we introduce the theoretical basis for this conclusion, relying on the number of independent usable components in a magnetic field at a point. In order to verify the claim experimentally, we develop a simple demonstration mechanism with 8 DOF designed specifically to show independent actuation. Using this mechanism with $500 \\mu \\mathrm{m}$ magnetic elements, we demonstrate eight independent motions of 0.6 mm with 8.6 % coupling using an eight coil system. These results will enable the creation of richer outputs in future microrobotic devices.", "title": "" }, { "docid": "b1272039194d07ff9b7568b7f295fbfb", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" } ]
scidocsrr
2dd3f5c65f29db879483195fa0d87466
A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict
[ { "docid": "8cffd66433d70a04b79f421233f2dcf2", "text": "By engaging in construction-based robotics activities, children as young as four can play to learn a range of concepts. The TangibleK Robotics Program paired developmentally appropriate computer programming and robotics tools with a constructionist curriculum designed to engage kindergarten children in learning computational thinking, robotics, programming, and problem-solving. This paper documents three kindergarten classrooms’ exposure to computer programming concepts and explores learning outcomes. Results point to strengths of the curriculum and areas where further redesign of the curriculum and technologies would be appropriate. Overall, the study demonstrates that kindergartners were both interested in and able to learn many aspects of robotics, programming, and computational thinking with the TangibleK curriculum design. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "211b858db72c962efaedf66f2ed9479d", "text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.", "title": "" } ]
[ { "docid": "3c4f19544e9cc51d307c6cc9aea63597", "text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.", "title": "" }, { "docid": "3a2168e93c1f8025e93de1a7594e17d5", "text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.", "title": "" }, { "docid": "865da040d64e56774f20d1f856aa8845", "text": "on Walden Pond (Massachusetts, USA) using diatoms and stable isotopes Dörte Köster,1∗ Reinhard Pienitz,1∗ Brent B. Wolfe,2 Sylvia Barry,3 David R. Foster,3 and Sushil S. Dixit4 Paleolimnology-Paleoecology Laboratory, Centre d’études nordiques, Department of Geography, Université Laval, Québec, Québec, G1K 7P4, Canada Department of Geography and Environmentals Studies, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada Harvard University, Harvard Forest, Post Office Box 68, Petersham, Massachusetts, 01366-0068, USA Environment Canada, National Guidelines & Standards Office, 351 St. Joseph Blvd., 8th Floor, Ottawa, Ontario, K1A 0H3, Canada ∗Corresponding authors: E-mail: doerte.koster.1@ulaval.ca, reinhard.pienitz@cen.ulaval.ca", "title": "" }, { "docid": "0cd863fc634b75f1b93137698d42080d", "text": "Prior research has established that peer tutors can benefit academically from their tutoring experiences. However, although tutor learning has been observed across diverse settings, the magnitude of these gains is often underwhelming. 
In this review, the authors consider how analyses of tutors’ actual behaviors may help to account for variation in learning outcomes and how typical tutoring behaviors may create or undermine opportunities for learning. The authors examine two tutoring activities that are commonly hypothesized to support tutor learning: explaining and questioning. These activities are hypothesized to support peer tutors’ learning via reflective knowledge-building, which includes self-monitoring of comprehension, integration of new and prior knowledge, and elaboration and construction of knowledge. The review supports these hypotheses but also finds that peer tutors tend to exhibit a pervasive knowledge-telling bias. Peer tutors, even when trained, focus more on delivering knowledge rather than developing it. As a result, the true potential for tutor learning may rarely be achieved. The review concludes by offering recommendations for how future research can utilize tutoring process data to understand how tutors learn and perhaps develop new training methods.", "title": "" }, { "docid": "bd6375ea90153d2e5b2846930922fc6e", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) have the potential to be valuable clinical tools. However, the varied nature of BCIs, combined with the large number of laboratories participating in BCI research, makes uniform performance reporting difficult. To address this situation, we present a tutorial on performance measurement in BCI research.\n\n\nAPPROACH\nA workshop on this topic was held at the 2013 International BCI Meeting at Asilomar Conference Center in Pacific Grove, California. This paper contains the consensus opinion of the workshop members, refined through discussion in the following months and the input of authors who were unable to attend the workshop.\n\n\nMAIN RESULTS\nChecklists for methods reporting were developed for both discrete and continuous BCIs. Relevant metrics are reviewed for different types of BCI research, with notes on their use to encourage uniform application between laboratories.\n\n\nSIGNIFICANCE\nGraduate students and other researchers new to BCI research may find this tutorial a helpful introduction to performance measurement in the field.", "title": "" }, { "docid": "cb1048d4bffb141074a4011279054724", "text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. 
The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of proposed method is compared against other syntax and rule based systems. The result also reveals the challenges of current research on question generation and indicates direction for future work.", "title": "" }, { "docid": "6ae9bfc681e2a9454196f4aa0c49a4da", "text": "Previous research has indicated that exposure to traditional media (i.e., television, film, and print) predicts the likelihood of internalization of a thin ideal; however, the relationship between exposure to internet-based social media on internalization of this ideal remains less understood. Social media differ from traditional forms of media by allowing users to create and upload their own content that is then subject to feedback from other users. This meta-analysis examined the association linking the use of social networking sites (SNSs) and the internalization of a thin ideal in females. Systematic searches were performed in the databases: PsychINFO, PubMed, Web of Science, Communication and Mass Media Complete, and ProQuest Dissertations and Theses Global. Six studies were included in the meta-analysis that yielded 10 independent effect sizes and a total of 1,829 female participants ranging in age from 10 to 46 years. We found a positive association between extent of use of SNSs and extent of internalization of a thin ideal with a small to moderate effect size (r = 0.18). The positive effect indicated that more use of SNSs was associated with significantly higher internalization of a thin ideal. A comparison was also made between study outcomes measuring broad use of SNSs and outcomes measuring SNS use solely as a function of specific appearance-related features (e.g., posting or viewing photographs). The use of appearance-related features had a stronger relationship with the internalization of a thin ideal than broad use of SNSs. The finding suggests that the ability to interact with appearance-related features online and be an active participant in media creation is associated with body image disturbance. Future research should aim to explore the way SNS users interact with the media posted online and the relationship linking the use of specific appearance features and body image disturbance.", "title": "" }, { "docid": "314ffaaf39e2345f90e85fc5c5fdf354", "text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. 
Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.", "title": "" }, { "docid": "0b8f4d14483d8fca51f882759f3194ad", "text": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.", "title": "" }, { "docid": "3f7684d107f22cb6e8a3006249d8582f", "text": "Substrate Integrated Waveguide has been an emerging technology for the realization of microwave and millimeter wave regions. It is the planar form of the conventional rectangular waveguide. It has profound applications at higher frequencies, since prevalent platforms like microstrip and coplanar waveguide have loss related issues. This paper discusses basic concepts of SIW, design aspects and their applications to leaky wave antennas. A brief overview of recent works on Substrate integrated Waveguide based Leaky Wave Antennas has been provided.", "title": "" }, { "docid": "975d1b5edfc68e8041794db9cc50d0d2", "text": "I’ve taken to writing this series of posts on a statistical view of deep learning with two principal motivations in mind. The first was as a personal exercise to make concrete and to test the limits of the way that I think about and use deep learning in my every day work. The second, was to highlight important statistical connections and implications of deep learning that I have not seen made in the popular courses, reviews and books on deep learning, but which are extremely important to keep in mind. This document forms a collection of these essays originally posted at blog.shakirm.com.", "title": "" }, { "docid": "4cfcbac8ec942252b79f2796fa7490f0", "text": "Over the next few years the amount of biometric data being at the disposal of various agencies and authentication service providers is expected to grow significantly. Such quantities of data require not only enormous amounts of storage but unprecedented processing power as well. To be able to face this future challenges more and more people are looking towards cloud computing, which can address these challenges quite effectively with its seemingly unlimited storage capacity, rapid data distribution and parallel processing capabilities. 
Since the available literature on how to implement cloud-based biometric services is extremely scarce, this paper capitalizes on the most important challenges encountered during the development work on biometric services, presents the most important standards and recommendations pertaining to biometric services in the cloud and ultimately, elaborates on the potential value of cloud-based biometric solutions by presenting a few existing (commercial) examples. In the final part of the paper, a case study on fingerprint recognition in the cloud and its integration into the e-learning environment Moodle is presented.", "title": "" }, { "docid": "0965f4f7b820f9561710837c7bb7b4c1", "text": "With the success of image classification problems, deep learning is expanding its application areas. In this paper, we apply deep learning to decode a polar code. As an initial step for memoryless additive Gaussian noise channel, we consider a deep feed-forward neural network and investigate its decoding performances with respect to numerous configurations: the number of hidden layers, the number of nodes for each layer, and activation functions. Generally, the higher complex network yields a better performance. Comparing the performances of regular list decoding, we provide a guideline for the configuration parameters. Although the training of deep learning may require high computational complexity, it should be noted that the field application of trained networks can be accomplished at a low level complexity. Considering the level of performance and complexity, we believe that deep learning is a competitive decoding tool.", "title": "" }, { "docid": "b045350bfb820634046bff907419d1bf", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "1ca92ec69901cda036fce2bb75512019", "text": "Information Retrieval deals with searching and retrieving information within the documents and it also searches the online databases and internet. Web crawler is defined as a program or software which traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawler is usually divided in three types of crawling techniques: General Purpose Crawling, Focused crawling and Distributed Crawling. In this paper, the applicability of Web Crawler in the field of web search and a review on Web Crawler to different problem domains in web search is discussed.", "title": "" }, { "docid": "0e08bd9133a46b15adec11d961eeed3f", "text": "This article presents a review of recent literature of intersection behavior analysis for three types of intersection participants; vehicles, drivers, and pedestrians. 
In this survey, behavior analysis of each participant group is discussed based on key features and elements used for intersection design, planning and safety analysis. Different methods used for data collection, behavior recognition and analysis are reviewed for each group and a discussion is provided on the state of the art along with challenges and future research directions in the field.", "title": "" }, { "docid": "e9eefe7d683a8b02a8456cc5ff0ebe9d", "text": "The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identifying research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact is accounted for 55.4% of total cost due to the arrangement of the soft floor price. As such, we argue that the setting of soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis on the conversation rates shows that the current bidding strategy is far less optimal, indicating the significant needs for optimisation algorithms incorporating the facts such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past.", "title": "" }, { "docid": "f1710683991c33e146a48ed4f08c7ae3", "text": "The rapid growth of social media, especially Twitter in Indonesia, has produced a large amount of user generated texts in the form of tweets. Since Twitter only provides the name and location of its users, we develop a classification system that predicts latent attributes of Twitter user based on his tweets. Latent attribute is an attribute that is not stated directly. Our system predicts age and job attributes of Twitter users that use Indonesian language. Classification model is developed by employing lexical features and three learning algorithms (Naïve Bayes, SVM, and Random Forest). Based on the experimental results, it can be concluded that the SVM method produces the best accuracy for balanced data.", "title": "" }, { "docid": "6ad5035563dc8edf370772a432f6fea8", "text": "We employ the new geometric active contour models, previously formulated, for edge detection and segmentation of magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound medical imagery. Our method is based on defining feature-based metrics on a given image which in turn leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus, the snake is attracted very quickly and efficiently to the desired feature.", "title": "" } ]
scidocsrr
35dc79435be5fb76fe57d5813197c79b
A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task
[ { "docid": "565941db0284458e27485d250493fd2a", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as aMarkov Random Fieldtuned to detect the patterns that context data create, and employ a Belief Propagationmechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" } ]
[ { "docid": "be8815170248d7635a46f07c503e32a3", "text": "ÐStochastic discrimination is a general methodology for constructing classifiers appropriate for pattern recognition. It is based on combining arbitrary numbers of very weak components, which are usually generated by some pseudorandom process, and it has the property that the very complex and accurate classifiers produced in this way retain the ability, characteristic of their weak component pieces, to generalize to new data. In fact, it is often observed, in practice, that classifier performance on test sets continues to rise as more weak components are added, even after performance on training sets seems to have reached a maximum. This is predicted by the underlying theory, for even though the formal error rate on the training set may have reached a minimum, more sophisticated measures intrinsic to this method indicate that classifier performance on both training and test sets continues to improve as complexity increases. In this paper, we begin with a review of the method of stochastic discrimination as applied to pattern recognition. Through a progression of examples keyed to various theoretical issues, we discuss considerations involved with its algorithmic implementation. We then take such an algorithmic implementation and compare its performance, on a large set of standardized pattern recognition problems from the University of California Irvine, and Statlog collections, to many other techniques reported on in the literature, including boosting and bagging. In doing these studies, we compare our results to those reported in the literature by the various authors for the other methods, using the same data and study paradigms used by them. Included in this paper is an outline of the underlying mathematical theory of stochastic discrimination and a remark concerning boosting, which provides a theoretical justification for properties of that method observed in practice, including its ability to generalize. Index TermsÐPattern recognition, classification algorithms, stochastic discrimination, SD.", "title": "" }, { "docid": "78c89f8aec24989737575c10b6bbad90", "text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. 
First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.", "title": "" }, { "docid": "e43d32bdad37002f70d797dd3d5bd5eb", "text": "Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into “slices”, and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.", "title": "" }, { "docid": "ce901f6509da9ab13d66056319c15bd8", "text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.", "title": "" }, { "docid": "2eaa686e4808b3c613a5061dc5bb14a7", "text": "To date, there is little information on the impact of more aggressive treatment regimen such as BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone) on the fertility of male patients with Hodgkin lymphoma (HL). We evaluated the impact of BEACOPP regimen on fertility status in 38 male patients with advanced-stage HL enrolled into trials of the German Hodgkin Study Group (GHSG). Before treatment, 6 (23%) patients had normozoospermia and 20 (77%) patients had dysspermia. After treatment, 34 (89%) patients had azoospermia, 4 (11%) had other dysspermia, and no patients had normozoospermia. There was no difference in azoospermia rate between patients treated with BEACOPP baseline and those given BEACOPP escalated (93% vs 87%, respectively; P > .999). After treatment, most of patients (93%) had abnormal values of follicle-stimulating hormone, whereas the number of patients with abnormal levels of testosterone and luteinizing hormone was less pronounced-57% and 21%, respectively. In univariate analysis, none of the evaluated risk factors (ie, age, clinical stage, elevated erythrocyte sedimentation rate, B symptoms, large mediastinal mass, extranodal disease, and 3 or more lymph nodes) was statistically significant. 
Male patients with HL are at high risk of infertility after treatment with BEACOPP.", "title": "" }, { "docid": "ee07cf061a1a3b7283c22434dcabd4eb", "text": "Over the past decade, machine learning techniques and in particular predictive modeling and pattern recognition in biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning algorithm in classification that extracts low-to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimers brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system in order to recognize the symptoms of Alzheimers disease when compared with normal subjects and to estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimers disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using the Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimers subjects from normal controls, where the accuracy of testing data reached 96.85%. This experiment suggests that the shift and scale invariant features extracted by CNN followed by deep learning classification represents the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "ff0d9abbfce64e83576d7e0eb235a46b", "text": "For multi-copter unmanned aerial vehicles (UAVs) sensing of the actual altitude is an important task. Many functions providing increased flight safety and easy maneuverability rely on altitude data. Commonly used sensors provide the altitude only relative to the starting position, or are limited in range and/or resolution. With the 77 GHz FMCW radar-based altimeter presented in this paper not only the actual altitude over ground but also obstacles such as trees and bushes can be detected. The capability of this solution is verified by measurements over different terrain and vegetation.", "title": "" }, { "docid": "06ba81270357c9bcf1dd8f1871741537", "text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. 
Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using", "title": "" }, { "docid": "e85e66b6ad6324a07ca299bf4f3cd447", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. 
The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "68420190120449343006879e23be8789", "text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.", "title": "" }, { "docid": "1bea3fdeb0ca47045a64771bd3925e11", "text": "The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in the particular context. Traditional supervised methods only use labeled data (context), while missing rich lexical knowledge such as the gloss which defines the meaning of a word sense. Recent studies have shown that incorporating glosses into neural networks for WSD has made significant improvement. However, the previous models usually build the context representation and gloss representation separately. In this paper, we find that the learning for the context and gloss representation can benefit from each other. Gloss can help to highlight the important words in the context, thus building a better context representation. Context can also help to locate the key words in the gloss of the correct word sense. Therefore, we introduce a co-attention mechanism to generate co-dependent representations for the context and gloss. Furthermore, in order to capture both word-level and sentence-level information, we extend the attention mechanism in a hierarchical fashion. Experimental results show that our model achieves the state-of-the-art results on several standard English all-words WSD test datasets.", "title": "" }, { "docid": "2acb16f1e67f141220dc05b90ac23385", "text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. 
Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.", "title": "" }, { "docid": "df679dcd213842a786c1ad9587c66f77", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. 
We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in", "title": "" }, { "docid": "9c857daee24f793816f1cee596e80912", "text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …", "title": "" }, { "docid": "6d329c1fa679ac201387c81f59392316", "text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. 
Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.", "title": "" }, { "docid": "b0eec6d5b205eafc6fcfc9710e9cf696", "text": "The reflectarray antenna is a substitution of reflector antennas by making use of planar phased array techniques [1]. The array elements are specially designed, providing proper phase compensations to the spatial feed through various techniques [2–4]. The bandwidth limitation due to microstrip structures has led to various multi-band designs [5–6]. In these designs, the multi-band performance is realized through multi-layer structures, causing additional volume requirement and fabrication cost. An alternative approach is provided in [7–8], where single-layer structures are adopted. The former [7] implements a dual-band linearly polarized reflectarray whereas the latter [8] establishes a single-layer tri-band concept with circular polarization (CP). In this paper, a prototype based on the conceptual structure in [8] is designed, fabricated, and measured. The prototype is composed of three sub-arrays on a single layer. They have pencil beam patterns at 32 GHz (Ka-band), 8.4 GHz (X-band), and 7.1 GHz (C-band), respectively. Considering the limited area, two phase compensation techniques are adopted by these sub-arrays. The varied element size (VES) technique is applied to the C-band, whereas the element rotation (ER) technique is used in both X-band and Ka-band.", "title": "" }, { "docid": "42db85c2e0e243c5e31895cfc1f03af6", "text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.", "title": "" }, { "docid": "0506a05ff43ae7590809015bfb37cf01", "text": "The balanced business scorecard is a widely-used management framework for optimal measurement of organizational performance. Explains that the scorecard originated in an attempt to address the problem of systems apparently not working. However, the problem proved to be less the information systems than the broader organizational systems, specifically business performance measurement. Discusses the fundamental points to cover in implementation of the scorecard. Presents ten “golden rules” developed as a means of bringing the framework closer to practical application. The Nolan Norton Institute developed the balanced business scorecard in 1990, resulting in the much-referenced Harvard Business Review article, “Measuring performance in the organization of the future”, by Robert Kaplan and David Norton. 
The balanced scorecard supplemented traditional financial measures with three additional perspectives: customers, internal business processes and learning and growth. Currently, the balanced business scorecard is a powerful and widely-accepted framework for defining performance measures and communicating objectives and vision to the organization. Many companies around the world have worked with the balanced business scorecard but experiences vary. Based on practical experiences of clients of Nolan, Norton & Co. and KPMG in putting the balanced business scorecard to work, the following ten golden rules for its implementation have been determined: 1 There are no standard solutions: all businesses differ. 2 Top management support is essential. 3 Strategy is the starting point. 4 Determine a limited and balanced number of objectives and measures. 5 No in-depth analyses up front, but refine and learn by doing. 6 Take a bottom-up and top-down approach. 7 It is not a systems issue, but systems are an issue. 8 Consider delivery systems at the start. 9 Consider the effect of performance indicators on behaviour. 10 Not all measures can be quantified.", "title": "" } ]
scidocsrr
abbd4694897bb5c4fd5866f00de2d593
Aesthetics and credibility in web site design
[ { "docid": "e7c8abf3387ba74ca0a6a2da81a26bc4", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The ®ndings stress the importance of studying the aesthetic aspect of human±computer interaction (HCI) design and its relationships to other design dimensions. q 2000 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "36a615660b8f0c60bef06b5a57887bd1", "text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of  quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.", "title": "" }, { "docid": "dfa5334f77bba5b1eeb42390fed1bca3", "text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.", "title": "" }, { "docid": "bf08d673b40109d6d6101947258684fd", "text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. 
We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.", "title": "" }, { "docid": "f285815e47ea0613fb1ceb9b69aee7df", "text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "title": "" }, { "docid": "aa418cfd93eaba0d47084d0b94be69b8", "text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.", "title": "" }, { "docid": "35b82263484452d83519c68a9dfb2778", "text": "S Music and the Moving Image Conference May 27th 29th, 2016 1. 
Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and film-making continue to evolve, the fundamental nature of story-telling remains the same. Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, emphasizing the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for: more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response; and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”— begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stillwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic. 
“Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in media res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stillwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: University of Indiana Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184202. Berkeley: The University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is in the way Dario Marianelli’s original score dissolves the boundaries between diagetic and non-diagetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first person character in the world of the film and in the shoes of a third person viewer aware of the underscore as a hallmark of the fiction of a film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest growing forms of digital media of today: videogames. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation. 
In fact, the growing trend towards hyperrealism and virtual reality intentionally progressively erodes the boundaries between the first person agent in real the world and agent on screen in the digital world. Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as opportunities to their professional networks. Meanwhile, more renowned composers saw freelancers as means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on an interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car", "title": "" }, { "docid": "bdfb48fcd7ef03d913a41ca8392552b6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. 
Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performances. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword, the generated gradients are un-biased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternatively optimize the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.", "title": "" }, { "docid": "dd51e9bed7bbd681657e8742bb5bf280", "text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a", "title": "" }, { "docid": "ed0d2151f5f20a233ed8f1051bc2b56c", "text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. 
Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.", "title": "" }, { "docid": "30db2040ab00fd5eec7b1eb08526f8e8", "text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. 
The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.", "title": "" }, { "docid": "19f604732dd88b01e1eefea1f995cd54", "text": "Power electronic transformer (PET) technology is one of the promising technology for medium/high power conversion systems. With the cutting-edge improvements in the power electronics and magnetics, makes it possible to substitute conventional line frequency transformer traction (LFTT) technology with the PET technology. Over the past years, research and field trial studies are conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems. This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces the new research possibilities especially in the power conversion stages, PET design, and the power switching devices.", "title": "" }, { "docid": "d9950f75380758d0a0f4fd9d6e885dfd", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "b1e2326ebdf729e5b55822a614b289a9", "text": "The work presented in this paper is targeted at the first phase of the test and measurements product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. To allow the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas are paramount. 
Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.", "title": "" }, { "docid": "4a72f9b04ba1515c0d01df0bc9b60ed7", "text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper. Economic considerations are also addressed.", "title": "" }, { "docid": "91bf842f809dd369644ffd2b10b9c099", "text": "We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full body poses, each with a set of 66 binary labels corresponding to the information about the garments worn in the image obtained in an automatic manner. As the automatically-collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these correct labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance.", "title": "" }, { "docid": "4fea653dd0dd8cb4ac941b2368ceb78f", "text": "During present study the antibacterial activity of black pepper (Piper nigrum Linn.) and its mode of action on bacteria were done. The extracts of black pepper were evaluated for antibacterial activity by disc diffusion method. The minimum inhibitory concentration (MIC) was determined by tube dilution method and mode of action was studied on membrane leakage of UV260 and UV280 absorbing material spectrophotometrically. The diameter of the zone of inhibition against various Gram positive and Gram negative bacteria was measured. The MIC was found to be 50-500ppm. Black pepper altered the membrane permeability resulting the leakage of the UV260 and UV280 absorbing material i.e., nucleic acids and proteins into the extra cellular medium. The results indicate excellent inhibition on the growth of Gram positive bacteria like Staphylococcus aureus, followed by Bacillus cereus and Streptococcus faecalis. Among the Gram negative bacteria Pseudomonas aeruginosa was more susceptible followed by Salmonella typhi and Escherichia coli.", "title": "" }, { "docid": "e812bed02753b807d1e03a2e05e87cb8", "text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. 
In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" } ]
scidocsrr
0cdb5afc9455ba4b14067708656b9a4a
Design of Power-Rail ESD Clamp Circuit With Ultra-Low Standby Leakage Current in Nanoscale
[ { "docid": "7af416164218d6ccb1d9772b77a5cd5c", "text": "Considering gate-oxide reliability, a new electrostatic discharge (ESD) protection scheme with an on-chip ESD bus (ESD_BUS) and a high-voltage-tolerant ESD clamp circuit for 1.2/2.5 V mixed-voltage I/O interfaces is proposed. The devices used in the high-voltage-tolerant ESD clamp circuit are all 1.2 V low-voltage N- and P-type MOS devices that can be safely operated under the 2.5-V bias conditions without suffering from the gate-oxide reliability issue. The four-mode (positive-to-VSS, negative-to-VSS, positive-to-VDD, and negative-to-VDD) ESD stresses on the mixed-voltage I/O pad and pin-to-pin ESD stresses can be effectively discharged by the proposed ESD protection scheme. The experimental results verified in a 0.13-mum CMOS process have confirmed that the proposed new ESD protection scheme has high human-body model (HBM) and machine-model (MM) ESD robustness with a fast turn-on speed. The proposed new ESD protection scheme, which is designed with only low- voltage devices, is an excellent and cost-efficient solution to protect mixed-voltage I/O interfaces.", "title": "" } ]
[ { "docid": "c6810bcd06378091799af4210f4f8573", "text": "F or years, business academics and practitioners have operated in the belief that sustained competitive advantage could accrue from a variety of industrylevel entry barriers, such as technological supremacy, patent protections, and government regulations. However, technological change and diffusion, rapid innovation, and deregulation have eroded these widely recognized barriers. In today’s environment, which requires flexibility, innovation, and speed-to-market, effectively developing and managing employees’ knowledge, experiences, skills, and expertise—collectively defined as “human capital”—has become a key success factor for sustained organizational performance.", "title": "" }, { "docid": "710e81da55d50271b55ac9a4f2d7f986", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "223470104e0ca1b04df1955df5afaa63", "text": "Wine is the product of complex interactions between fungi, yeasts and bacteria that commence in the vineyard and continue throughout the fermentation process until packaging. Although grape cultivar and cultivation provide the foundations of wine flavour, microorganisms, especially yeasts, impact on the subtlety and individuality of the flavour response. Consequently, it is important to identify and understand the ecological interactions that occur between the different microbial groups, species and strains. These interactions encompass yeast-yeast, yeast-filamentous fungi and yeast-bacteria responses. The surface of healthy grapes has a predominance of Aureobasidium pullulans, Metschnikowia, Hanseniaspora (Kloeckera), Cryptococcus and Rhodotorula species depending on stage of maturity. This microflora moderates the growth of spoilage and mycotoxigenic fungi on grapes, the species and strains of yeasts that contribute to alcoholic fermentation, and the bacteria that contribute to malolactic fermentation. Damaged grapes have increased populations of lactic and acetic acid bacteria that impact on yeasts during alcoholic fermentation. Alcoholic fermentation is characterised by the successional growth of various yeast species and strains, where yeast-yeast interactions determine the ecology. Through yeast-bacterial interactions, this ecology can determine progression of the malolactic fermentation, and potential growth of spoilage bacteria in the final product. 
The mechanisms by which one species/strain impacts on another in grape-wine ecosystems include: production of lytic enzymes, ethanol, sulphur dioxide and killer toxin/bacteriocin like peptides; nutrient depletion including removal of oxygen, and production of carbon dioxide; and release of cell autolytic components. Cell-cell communication through quorum sensing molecules needs investigation.", "title": "" }, { "docid": "20a484c01402cdc464cf0b46e577686e", "text": "Healthcare costs have increased dramatically and the demand for highquality care will only grow in our aging society. At the same time,more event data are being collected about care processes. Healthcare Information Systems (HIS) have hundreds of tables with patient-related event data. Therefore, it is quite natural to exploit these data to improve care processes while reducing costs. Data science techniqueswill play a crucial role in this endeavor. Processmining can be used to improve compliance and performance while reducing costs. The chapter sets the scene for process mining in healthcare, thus serving as an introduction to this SpringerBrief.", "title": "" }, { "docid": "3dd518c87372b51a9284e4b8aa2e4fb4", "text": "Traditional background modeling and subtraction methods have a strong assumption that the scenes are of static structures with limited perturbation. These methods will perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend the local binary patterns from spatial domain to spatio-temporal domain, and present a new online dynamic texture extraction operator, named spatio- temporal local binary patterns (STLBP). Then we present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms which combine spatial texture and temporal motion information together. Compared with traditional methods, experimental results show that the proposed method adapts quickly to the changes of the dynamic background. It achieves accurate detection of moving objects and suppresses most of the false detections for dynamic changes of nature scenes.", "title": "" }, { "docid": "7afe5c6affbaf30b4af03f87a018a5b3", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "6a128aa00edaf147df327e7736eeb4c9", "text": "Query segmentation is essential to query processing. It aims to tokenize query words into several semantic segments and help the search engine to improve the precision of retrieval. In this paper, we present a novel unsupervised learning approach to query segmentation based on principal eigenspace similarity of queryword-frequency matrix derived from web statistics. 
Experimental results show that our approach could achieve superior performance of 35.8% and 17.7% in Fmeasure over the two baselines respectively, i.e. MI (Mutual Information) approach and EM optimization approach.", "title": "" }, { "docid": "92683433c212b8d9afc85f5ed2b88999", "text": "Language Models (LMs) for Automatic Speech Recognition (ASR) are typically trained on large text corpora from news articles, books and web documents. These types of corpora, however, are unlikely to match the test distribution of ASR systems, which expect spoken utterances. Therefore, the LM is typically adapted to a smaller held-out in-domain dataset that is drawn from the test distribution. We propose three LM adaptation approaches for Deep NN and Long Short-Term Memory (LSTM): (1) Adapting the softmax layer in the Neural Network (NN); (2) Adding a non-linear adaptation layer before the softmax layer that is trained only in the adaptation phase; (3) Training the extra non-linear adaptation layer in pre-training and adaptation phases. Aiming to improve upon a hierarchical Maximum Entropy (MaxEnt) second-pass LM baseline, which factors the model into word-cluster and word models, we build an NN LM that predicts only word clusters. Adapting the LSTM LM by training the adaptation layer in both training and adaptation phases (Approach 3), we reduce the cluster perplexity by 30% on a held-out dataset compared to an unadapted LSTM LM. Initial experiments using a state-of-the-art ASR system show a 2.3% relative reduction in WER on top of an adapted MaxEnt LM.", "title": "" }, { "docid": "7c1b3ba1b8e33ed866ae90b3ddf80ce6", "text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.", "title": "" }, { "docid": "901fbd46cdd4403c8398cb21e1c75ba1", "text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. 
We are extending our work on a larger data set for building an anomaly detection system.", "title": "" }, { "docid": "cfd0cadbdf58ee01095aea668f0da4fe", "text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.", "title": "" }, { "docid": "36e4c1d930ea33962a51f293e4c3a60e", "text": "Address Space Layout Randomization (ASLR) can increase the cost of exploiting memory corruption vulnerabilities. One major weakness of ASLR is that it assumes the secrecy of memory addresses and is thus ineffective in the face of memory disclosure vulnerabilities. Even fine-grained variants of ASLR are shown to be ineffective against memory disclosures. In this paper we present an approach that synchronizes randomization with potential runtime disclosure. By applying rerandomization to the memory layout of a process every time it generates an output, our approach renders disclosures stale by the time they can be used by attackers to hijack control flow. We have developed a fully functioning prototype for x86_64 C programs by extending the Linux kernel, GCC, and the libc dynamic linker. The prototype operates on C source code and recompiles programs with a set of augmented information required to track pointer locations and support runtime rerandomization. Using this augmented information we dynamically relocate code segments and update code pointer values during runtime. Our evaluation on the SPEC CPU2006 benchmark, along with other applications, show that our technique incurs a very low performance overhead (2.1% on average).", "title": "" }, { "docid": "8b84dc47c6a9d39ef1d094aa173a954c", "text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. 
We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.", "title": "" }, { "docid": "1288abeaddded1564b607c9f31924697", "text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.", "title": "" }, { "docid": "ce83a16a6ccce5ccc58577b25ab33788", "text": "In this paper, we address the problem of automatically extracting disease-symptom relationships from health question-answer forums due to its usefulness for medical question answering system. To cope with the problem, we divide our main task into two subtasks since they exhibit different challenges: (1) disease-symptom extraction across sentences, (2) disease-symptom extraction within a sentence. For both subtasks, we employed machine learning approach leveraging several hand-crafted features, such as syntactic features (i.e., information from part-of-speech tags) and pre-trained word vectors. Furthermore, we basically formulate our problem as a binary classification task, in which we classify the \"indicating\" relation between a pair of Symptom and Disease entity. To evaluate the performance, we also collected and annotated corpus containing 463 pairs of question-answer threads from several Indonesian health consultation websites. Our experiment shows that, as our expected, the first subtask is relatively more difficult than the second subtask. For the first subtask, the extraction of disease-symptom relation only achieved 36% in terms of F1 measure, while the second one was 76%. To the best of our knowledge, this is the first work addressing such relation extraction task for both \"across\" and \"within\" sentence, especially in Indonesia.", "title": "" }, { "docid": "2def5b7bb42a5b3b2eec57ff5dfc2da0", "text": "Deepened periodontal pockets exert a significant pathological burden on the host and its immune system, particularly in a patient with generalized moderate to severe periodontitis. This burden is extensive and longitudinal, occurring over decades of disease development. 
Considerable diagnostic and prognostic successes in this regard have come from efforts to measure the depths of the pockets and their contents, including level of inflammatory mediators, cellular exudates and microbes; however, the current standard of care for measuring these pockets, periodontal probing, is an analog technology in a digital age. Measurements obtained by probing are variable, operator dependent and influenced by site-specific factors. Despite these limitations, manual probing is still the standard of care for periodontal diagnostics globally. However, it is becoming increasingly clear that this technology needs to be updated to be compatible with the digital technologies currently being used to image other orofacial structures, such as maxillary sinuses, alveolar bone, nerve foramina and endodontic canals in 3 dimensions. This review aims to summarize the existing technology, as well as new imaging strategies that could be utilized for accurate evaluation of periodontal pocket dimensions.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "59da726302c06abef243daee87cdeaa7", "text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. 
In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.", "title": "" }, { "docid": "7e800094f52080194d94bdedf1d92b9c", "text": "IMPORTANCE\nHealth care-associated infections (HAIs) account for a large proportion of the harms caused by health care and are associated with high costs. Better evaluation of the costs of these infections could help providers and payers to justify investing in prevention.\n\n\nOBJECTIVE\nTo estimate costs associated with the most significant and targetable HAIs.\n\n\nDATA SOURCES\nFor estimation of attributable costs, we conducted a systematic review of the literature using PubMed for the years 1986 through April 2013. For HAI incidence estimates, we used the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC).\n\n\nSTUDY SELECTION\nStudies performed outside the United States were excluded. Inclusion criteria included a robust method of comparison using a matched control group or an appropriate regression strategy, generalizable populations typical of inpatient wards and critical care units, methodologic consistency with CDC definitions, and soundness of handling economic outcomes.\n\n\nDATA EXTRACTION AND SYNTHESIS\nThree review cycles were completed, with the final iteration carried out from July 2011 to April 2013. Selected publications underwent a secondary review by the research team.\n\n\nMAIN OUTCOMES AND MEASURES\nCosts, inflated to 2012 US dollars.\n\n\nRESULTS\nUsing Monte Carlo simulation, we generated point estimates and 95% CIs for attributable costs and length of hospital stay. On a per-case basis, central line-associated bloodstream infections were found to be the most costly HAIs at $45,814 (95% CI, $30,919-$65,245), followed by ventilator-associated pneumonia at $40,144 (95% CI, $36,286-$44,220), surgical site infections at $20,785 (95% CI, $18,902-$22,667), Clostridium difficile infection at $11,285 (95% CI, $9118-$13,574), and catheter-associated urinary tract infections at $896 (95% CI, $603-$1189). 
The total annual costs for the 5 major infections were $9.8 billion (95% CI, $8.3-$11.5 billion), with surgical site infections contributing the most to overall costs (33.7% of the total), followed by ventilator-associated pneumonia (31.6%), central line-associated bloodstream infections (18.9%), C difficile infections (15.4%), and catheter-associated urinary tract infections (<1%).\n\n\nCONCLUSIONS AND RELEVANCE\nWhile quality improvement initiatives have decreased HAI incidence and costs, much more remains to be done. As hospitals realize savings from prevention of these complications under payment reforms, they may be more likely to invest in such strategies.", "title": "" }, { "docid": "f8f8c96e6abede6bc226a0c9f171e9e1", "text": "Simulation is the research tool of choice for a majority of the mobile ad hoc network (MANET) community. However, while the use of simulation has increased, the credibility of the simulation results has decreased. To determine the state of MANET simulation studies, we surveyed the 2000-2005 proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). From our survey, we found significant shortfalls. We present the results of our survey in this paper. We then summarize common simulation study pitfalls found in our survey. Finally, we discuss the tools available that aid the development of rigorous simulation studies. We offer these results to the community with the hope of improving the credibility of MANET simulation-based studies.", "title": "" } ]
scidocsrr
4bdc7f25ba00efc2f132798402bbb89b
Predicting Age Range of Users over Microblog Dataset
[ { "docid": "ebc107147884d89da4ef04eba2d53a73", "text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.", "title": "" } ]
[ { "docid": "35ce8c11fa7dd22ef0daf9d0bd624978", "text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.", "title": "" }, { "docid": "fd76b7a11f8e071ebe045997ee598bbb", "text": "γ-Aminobutyric acid (GABA) has high physiological activity in plant stress physiology. This study showed that the application of exogenous GABA by root drenching to moderately (MS, 150 mM salt concentration) and severely salt-stressed (SS, 300 mM salt concentration) plants significantly increased endogenous GABA concentration and improved maize seedling growth but decreased glutamate decarboxylase (GAD) activity compared with non-treated ones. Exogenous GABA alleviated damage to membranes, increased in proline and soluble sugar content in leaves, and reduced water loss. After the application of GABA, maize seedling leaves suffered less oxidative damage in terms of superoxide anion (O2·-) and malondialdehyde (MDA) content. GABA-treated MS and SS maize seedlings showed increased enzymatic antioxidant activity compared with that of untreated controls, and GABA-treated MS maize seedlings had a greater increase in enzymatic antioxidant activity than SS maize seedlings. Salt stress severely damaged cell function and inhibited photosynthesis, especially in SS maize seedlings. Exogenous GABA application could reduce the accumulation of harmful substances, help maintain cell morphology, and improve the function of cells during salt stress. These effects could reduce the damage to the photosynthetic system from salt stress and improve photosynthesis and chlorophyll fluorescence parameters. GABA enhanced the salt tolerance of maize seedlings.", "title": "" }, { "docid": "43d46b56cdf20c8b8b67831caddfe4db", "text": "This research addresses a challenging issue that is to recognize spoken Arabic letters, that are three letters of hijaiyah that have indentical pronounciation when pronounced by Indonesian speakers but actually has different makhraj in Arabic, the letters are sa, sya and tsa. The research uses Mel-Frequency Cepstral Coefficients (MFCC) based feature extraction and Artificial Neural Network (ANN) classification method. The result shows the proposed method obtain a good accuracy with an average acuracy is 92.42%, with recognition accuracy each letters (sa, sya, and tsa) prespectivly 92.38%, 93.26% and 91.63%.", "title": "" }, { "docid": "ecd4dd9d8807df6c8194f7b4c7897572", "text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. 
In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.", "title": "" }, { "docid": "973426438175226bb46c39cc0a390d97", "text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.", "title": "" }, { "docid": "04c7d8265e8b41aee67e5b11b3bc4fa2", "text": "Stretchable microelectromechanical systems (MEMS) possess higher mechanical deformability and adaptability than devices based on conventional solid and flexible substrates, hence they are particularly desirable for biomedical, optoelectronic, textile and other innovative applications. The stretchability performance can be evaluated by the failure strain of the embedded routing and the strain applied to the elastomeric substrate. The routings are divided into five forms according to their geometry: straight; wavy; wrinkly; island-bridge; and conductive-elastomeric. These designs are reviewed and their resistance-to-failure performance is investigated. The failure modeling, numerical analysis, and fabrication of routings are presented. The current review concludes with the essential factors of the stretchable electrical routing for achieving high performance, including routing angle, width and thickness. The future challenges of device integration and reliability assessment of the stretchable routings are addressed.", "title": "" }, { "docid": "da816b4a0aea96feceefe22a67c45be4", "text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. 
This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.", "title": "" }, { "docid": "aa749c00010e5391710738cc235c1c35", "text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in different formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "c7f38e2284ad6f1258fdfda3417a6e14", "text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. 
This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.", "title": "" }, { "docid": "7a05f2c12c3db9978807eb7c082db087", "text": "This paper discusses the importance, the complexity and the challenges of mapping mobile robot’s unknown and dynamic environment, besides the role of sensors and the problems inherited in map building. These issues remain largely an open research problems in developing dynamic navigation systems for mobile robots. The paper presenst the state of the art in map building and localization for mobile robots navigating within unknown environment, and then introduces a solution for the complex problem of autonomous map building and maintenance method with focus on developing an incremental grid based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and stay away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in parallel and distributed framework. Simulation based experiments has been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.", "title": "" }, { "docid": "eb271acef996a9ba0f84a50b5055953b", "text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup", "title": "" }, { "docid": "aba7cb0f5f50a062c42b6b51457eb363", "text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. 
This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.", "title": "" }, { "docid": "9197a5d92bd19ad29a82679bb2a94285", "text": "Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8 ]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2 ] and morphing [5 ], [6 ] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2 ]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The", "title": "" }, { "docid": "95e2a5dfa0b5e8d8719ae86f17f6d653", "text": "Time series classification is an increasing research topic due to the vast amount of time series data that is being created over a wide variety of fields. 
The particularity of the data makes it a challenging task and different approaches have been taken, including the distance based approach. 1-NN has been a widely used method within distance based time series classification due to its simplicity but still good performance. However, its supremacy may be attributed to being able to use specific distances for time series within the classification process and not to the classifier itself. With the aim of exploiting these distances within more complex classifiers, new approaches have arisen in the past few years that are competitive or which outperform the 1-NN based approaches. In some cases, these new methods use the distance measure to transform the series into feature vectors, bridging the gap between time series and traditional classifiers. In other cases, the distances are employed to obtain a time series kernel and enable the use of kernel methods for time series classification. One of the main challenges is that a kernel function must be positive semi-definite, a matter that is also addressed within this review. The presented review includes a taxonomy of all those methods that aim to classify time series using a distance based approach, as well as a discussion of the strengths and weaknesses of each method.", "title": "" }, { "docid": "749e11a625e94ab4e1f03a74aa6b3ab2", "text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.", "title": "" }, { "docid": "30b508c7b576c88705098ac18657664b", "text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.", "title": "" }, { "docid": "adba3380818a72270aea9452d2b77af2", "text": "Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. 
Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems.\n In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers.", "title": "" }, { "docid": "896eac4a4b782075119998ce6cfbf366", "text": "In recent years, sustainability has been a major focus of fashion business operations because fashion industry development causes harmful effects to the environment, both indirectly and directly. The sustainability of the fashion industry is generally based on several levels and this study focuses on investigating the optimal supplier selection problem for sustainable materials supply in fashion clothing production. Following the ground rule that sustainable development is based on the Triple Bottom Line (TBL), this paper has framed twelve criteria from the economic, environmental and social perspectives for evaluating suppliers. The well-established multi-criteria decision making tool Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed for ranking potential suppliers among the pool of suppliers. Through a real case study, the proposed approach has been applied and some managerial implications are derived.", "title": "" }, { "docid": "213acf777983f4339d6ee25a4467b1be", "text": "RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. In this paper the concept of the RoadGraph is described in detail and first results are shown.", "title": "" } ]
scidocsrr
5dc89122ca1e53951781f75b21942cfb
DAGER: Deep Age, Gender and Emotion Recognition Using Convolutional Neural Network
[ { "docid": "18cf88b01ff2b20d17590d7b703a41cb", "text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.", "title": "" } ]
[ { "docid": "2af524d484b7bb82db2dd92727a49fff", "text": "Computer-based multimedia learning environments — consisting of pictures (such as animation) and words (such as narration) — offer a potentially powerful venue for improving student understanding. How can we use words and pictures to help people understand how scientific systems work, such as how a lightning storm develops, how the human respiratory system operates, or how a bicycle tire pump works? This paper presents a cognitive theory of multimedia learning which draws on dual coding theory, cognitive load theory, and constructivist learning theory. Based on the theory, principles of instructional design for fostering multimedia learning are derived and tested. The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words. The contiguity principle is that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation. The coherence principle is that multimedia explanations are better understood when they include few rather than many extraneous words and sounds. The modality principle is that it is better to present words as auditory narration than as visual on-screen text. The redundancy principle is that it is better to present animation and narration than to present animation, narration, and on-screen text. By beginning with a cognitive theory of how learners process multimedia information, we have been able to conduct focused research that yields some preliminary principles of instructional design for multimedia messages.  2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "d60fb42ca7082289c907c0e2e2c343fc", "text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to", "title": "" }, { "docid": "f38bdbabdceacbf8b50739e6dd065876", "text": "Treatment of high-strength phenolic wastewater by a novel two-step method was investigated in the present study. The two-step treatment method consisted of chemical coagulation of the wastewater by metal chloride followed by further phenol reduction by resin adsorption. The present combined treatment was found to be highly efficient in removing the phenol concentration from the aqueous solution and was proved capable of lowering the initial phenol concentration from over 10,000 mg/l to below direct discharge level (1mg/l). In the experimental tests, appropriate conditions were identified for optimum treatment operation. Theoretical investigations were also performed for batch equilibrium adsorption and column adsorption of phenol by macroreticular resin. The empirical Freundlich isotherm was found to represent well the equilibrium phenol adsorption. The column model with appropriately identified model parameters could accurately predict the breakthrough times.", "title": "" }, { "docid": "0dd75eaa062ea30742e03b71d17119c5", "text": "Ayahuasca is a hallucinogenic beverage that combines the action of the 5-HT2A/2C agonist N,N-dimethyltryptamine (DMT) from Psychotria viridis with the monoamine oxidase inhibitors (MAOIs) induced by beta-carbonyls from Banisteriopsis caapi. 
Previous investigations have highlighted the involvement of ayahuasca with the activation of brain regions known to be involved with episodic memory, contextual associations and emotional processing after ayahuasca ingestion. Moreover long term users show better performance in neuropsychological tests when tested in off-drug condition. This study evaluated the effects of long-term administration of ayahuasca on Morris water maze (MWM), fear conditioning and elevated plus maze (EPM) performance in rats. Behavior tests started 48h after the end of treatment. Freeze-dried ayahuasca doses of 120, 240 and 480 mg/kg were used, with water as the control. Long-term administration consisted of a daily oral dose for 30 days by gavage. The behavioral data indicated that long-term ayahuasca administration did not affect the performance of animals in MWM and EPM tasks. However the dose of 120 mg/kg increased the contextual conditioned fear response for both background and foreground fear conditioning. The tone conditioned response was not affected after long-term administration. In addition, the increase in the contextual fear response was maintained during the repeated sessions several weeks after training. Taken together, these data showed that long-term ayahuasca administration in rats can interfere with the contextual association of emotional events, which is in agreement with the fact that the beverage activates brain areas related to these processes.", "title": "" }, { "docid": "c8b4ea815c449872fde2df910573d137", "text": "Two clinically distinct forms of Blount disease (early-onset and late-onset), based on whether the lower-limb deformity develops before or after the age of four years, have been described. Although the etiology of Blount disease may be multifactorial, the strong association with childhood obesity suggests a mechanical basis. A comprehensive analysis of multiplanar deformities in the lower extremity reveals tibial varus, procurvatum, and internal torsion along with limb shortening. Additionally, distal femoral varus is commonly noted in the late-onset form. When a patient has early-onset disease, a realignment tibial osteotomy before the age of four years decreases the risk of recurrent deformity. Gradual correction with distraction osteogenesis is an effective means of achieving an accurate multiplanar correction, especially in patients with late-onset disease.", "title": "" }, { "docid": "bffddca72c7e9d6e5a8c760758a98de0", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contradictory with existing work.", "title": "" }, { "docid": "60d0af0788a1b6641c722eafd0d1b8bb", "text": "Enhancing the quality of image is a continuous process in image processing related research activities. For some applications it becomes essential to have best quality of image such as in forensic department, where in order to retrieve maximum possible information, image has to be enlarged in terms of size, with higher resolution and other features associated with it. Such obtained high quality images have also a concern in satellite imaging, medical science, High Definition Television (HDTV), etc. In this paper a novel approach of getting high resolution image from a single low resolution image is discussed. 
The Non Sub-sampled Contourlet Transform (NSCT) based learning is used to learn the NSCT coefficients at the finer scale of the unknown high-resolution image from a dataset of high resolution images. The cost function consisting of a data fitting term and a Gabor prior term is optimized using an Iterative Back Projection (IBP). By making use of directional decomposition property of the NSCT and the Gabor filter bank with various orientations, the proposed method is capable to reconstruct an image with less edge artifacts. The validity of the proposed approach is proven through simulation on several images. RMS measures, PSNR measures and illustrations show the success of the proposed method.", "title": "" }, { "docid": "b7b664d1749b61f2f423d7080a240a60", "text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.", "title": "" }, { "docid": "cfb565adac45aec4597855d4b6d86e97", "text": "3 Cooccurrence and frequency counts 11 12 3.1 Surface cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 13 3.2 Textual cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 14 3.3 Syntactic cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 15 3.4 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 16", "title": "" }, { "docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d", "text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. 
Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.", "title": "" }, { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "7343d29bfdc1a4466400f8752dce4622", "text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.", "title": "" }, { "docid": "174e4ef91fa7e2528e0e5a2a9f1e0c7c", "text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from slippage falling-down. A micro inertial measurement unit (muIMU) which is based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A weightless recognition algorithm is used for real-time falling determination. With the algorithm, the microcontroller integrated with muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to be fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa (gage pressure). Also, we present our progress on development of the inflator and the airbags", "title": "" }, { "docid": "5a092bc5bac7e36c71ad764768c2ac5a", "text": "Adolescence is characterized by making risky decisions. Early lesion and neuroimaging studies in adults pointed to the ventromedial prefrontal cortex and related structures as having a key role in decision-making. 
More recent studies have fractionated decision-making processes into its various components, including the representation of value, response selection (including inter-temporal choice and cognitive control), associative learning, and affective and social aspects. These different aspects of decision-making have been the focus of investigation in recent studies of the adolescent brain. Evidence points to a dissociation between the relatively slow, linear development of impulse control and response inhibition during adolescence versus the nonlinear development of the reward system, which is often hyper-responsive to rewards in adolescence. This suggests that decision-making in adolescence may be particularly modulated by emotion and social factors, for example, when adolescents are with peers or in other affective ('hot') contexts.", "title": "" }, { "docid": "473eb35bb5d3a85a4e9f5867aaf3c363", "text": "This paper develops techniques using which humans can be visually recognized. While face recognition would be one approach to this problem, we believe that it may not be always possible to see a person?s face. Our technique is complementary to face recognition, and exploits the intuition that human motion patterns and clothing colors can together encode several bits of information. Treating this information as a \"temporary fingerprint\", it may be feasible to recognize an individual with reasonable consistency, while allowing her to turn off the fingerprint at will.\n One application of visual fingerprints relates to augmented reality, in which an individual looks at other people through her camera-enabled glass (e.g., Google Glass) and views information about them. Another application is in privacy-preserving pictures ? Alice should be able to broadcast her \"temporary fingerprint\" to all cameras in the vicinity along with a privacy preference, saying \"remove me\". If a stranger?s video happens to include Alice, the device can recognize her fingerprint in the video and erase her completely. This paper develops the core visual fingerprinting engine ? InSight ? on the platform of Android smartphones and a backend server running MATLAB and OpenCV. Results from real world experiments show that 12 individuals can be discriminated with 90% accuracy using 6 seconds of video/motion observations. Video based emulation confirms scalability up to 40 users.", "title": "" }, { "docid": "7994b0cad77119ed42c964be6a05ab94", "text": "CONTEXT-AWARE ARGUMENT MINING AND ITS APPLICATIONS IN EDUCATION", "title": "" }, { "docid": "95db5921ba31588e962ffcd8eb6469b0", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. 
Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. 
Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this", "title": "" }, { "docid": "937bb3c066500ddffe8d3d78b3580c26", "text": "Multimodal semantic representation is an evolving area of research in natural language processing as well as computer vision. Combining or integrating perceptual information, such as visual features, with linguistic features is recently being actively studied. This paper presents a novel bimodal autoencoder model for multimodal representation learning: the autoencoder learns in order to enhance linguistic feature vectors by incorporating the corresponding visual features. During the runtime, owing to the trained neural network, visually enhanced multimodal representations can be achieved even for words for which direct visual-linguistic correspondences are not learned. The empirical results obtained with standard semantic relatedness tasks demonstrate that our approach is generally promising. We further investigate the potential efficacy of the enhanced word embeddings in discriminating antonyms and synonyms from vaguely related words.", "title": "" }, { "docid": "64e573006e2fb142dba1b757b1e4f20d", "text": "Online learning algorithms often have to operate in the presence of concept drift (i.e., the concepts to be learned can change with time). This paper presents a new categorization for concept drift, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories. Moreover, although ensembles of learning machines have been used to learn in the presence of concept drift, there has been no deep study of why they can be helpful for that and which of their features can contribute or not for that. As diversity is one of these features, we present a diversity analysis in the presence of different types of drifts. We show that, before the drift, ensembles with less diversity obtain lower test errors. 
On the other hand, it is a good strategy to maintain highly diverse ensembles to obtain lower test errors shortly after the drift, independently of the type of drift, even though high diversity is more important for more severe drifts. Longer after the drift, high diversity becomes less important. Diversity by itself can help to reduce the initial increase in error caused by a drift, but does not provide faster recovery from drifts in the long term.", "title": "" } ]
scidocsrr
4d8b0f9058f9468c453375d60c45c2eb
A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors
[ { "docid": "74ae28cf8b7f458b857b49748573709d", "text": "Muscle fiber conduction velocity is based on the time delay estimation between electromyography recording channels. The aim of this study is to identify the best estimator among generalized correlation methods in the case where the time delay is constant, in order to extend these estimators to the time-varying delay case. The fractional part of the time delay was calculated by using parabolic interpolation. The results indicate that the Eckart filter and Hannan-Thomson (HT) give the best results in the case where the signal-to-noise ratio (SNR) is 0 dB.", "title": "" } ]
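The passage above reduces to estimating an inter-channel time delay from a correlation peak and recovering its fractional part by parabolic interpolation. The sketch below illustrates only that core step under simplifying assumptions: it uses plain cross-correlation rather than the Eckart or Hannan-Thomson weightings mentioned in the passage, assumes two equal-length, uniformly sampled channels, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def delay_estimate(x, y, fs):
    """Estimate the delay of channel y relative to channel x, in seconds.

    Assumes two equal-length, uniformly sampled signals. The integer-lag
    delay is taken at the cross-correlation peak; the fractional part is
    obtained by fitting a parabola through the peak and its two neighbours.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Plain (unweighted) cross-correlation of the mean-removed channels.
    r = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    k = int(np.argmax(r))                      # index of the integer-lag peak
    if 0 < k < len(r) - 1:
        r0, r1, r2 = r[k - 1], r[k], r[k + 1]
        denom = r0 - 2.0 * r1 + r2
        frac = 0.0 if denom == 0 else 0.5 * (r0 - r2) / denom
    else:
        frac = 0.0                             # peak at the border: no interpolation
    lag_samples = (k - (n - 1)) + frac         # signed lag in (fractional) samples
    return lag_samples / fs
```

Replacing the plain correlation with a frequency-domain weighting (as in the Eckart or HT estimators) would change only how r is computed; the parabolic refinement of the peak stays the same.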
[ { "docid": "aeabcc9117801db562d83709fda22722", "text": "The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project). © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dcd21065898c9dd108617a3db8dad6a1", "text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.", "title": "" }, { "docid": "d6477bab69274263bc208d19d9ec3ec2", "text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. 
We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.", "title": "" }, { "docid": "c4ebb90bad820a3aba5f0746791b3b5c", "text": "This paper is concerned with the problem of finding a sparse graph capturing the conditional dependence between the entries of a Gaussian random vector, where the only available information is a sample correlation matrix. A popular approach is to solve a graphical lasso problem with a sparsity-promoting regularization term. This paper derives a simple condition under which the computationally-expensive graphical lasso behaves the same as the simple heuristic method of thresholding. This condition depends only on the solution of graphical lasso and makes no direct use of the sample correlation matrix or the regularization coefficient. It is also proved that this condition is always satisfied if the solution of graphical lasso is replaced by its first-order Taylor approximation. The condition is tested on several random problems and it is shown that graphical lasso and the thresholding method (based on the correlation matrix) lead to a similar result (if not equivalent), provided the regularization term is high enough to seek a sparse graph.", "title": "" }, { "docid": "4d449388969075c56b921f9183fbc7b5", "text": "Tasks such as question answering and semantic search are dependent on the ability of querying & reasoning over large-scale commonsense knowledge bases (KBs). However, dealing with commonsense data demands coping with problems such as the increase in schema complexity, semantic inconsistency, incompleteness and scalability. This paper proposes a selective graph navigation mechanism based on a distributional relational semantic model which can be applied to querying & reasoning over heterogeneous knowledge bases (KBs). The approach can be used for approximative reasoning, querying and associational knowledge discovery. In this paper we focus on commonsense reasoning as the main motivational scenario for the approach. The approach focuses on addressing the following problems: (i) providing a semantic selection mechanism for facts which are relevant and meaningful in a specific reasoning & querying context and (ii) allowing coping with information incompleteness in large KBs. The approach is evaluated using ConceptNet as a commonsense KB, and achieved high selectivity, high scalability and high accuracy in the selection of meaningful navigational paths. Distributional semantics is also used as a principled mechanism to cope with information incompleteness.", "title": "" }, { "docid": "53a05c0438a0a26c8e3e74e1fa7b192b", "text": "This paper presents a simple method based on sinusoidal-amplitude detector for realizing the resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. Two output voltages are proportional to sine and cosine envelopes of resolver-shaft angle without low-pass filter. 
Experimental results demonstrating characteristic of the proposed circuit are included.", "title": "" }, { "docid": "1c89a187c4d930120454dfffaa1e7d5b", "text": "Many researches in face recognition have been dealing with the challenge of the great variability in head pose, lighting intensity and direction,facial expression, and aging. The main purpose of this overview is to describe the recent 3D face recognition algorithms. The last few years more and more 2D face recognition algorithms are improved and tested on less than perfect images. However, 3D models hold more information of the face, like surface information, that can be used for face recognition or subject discrimination. Another major advantage is that 3D face recognition is pose invariant. A disadvantage of most presented 3D face recognition methods is that they still treat the human face as a rigid object. This means that the methods aren’t capable of handling facial expressions. Although 2D face recognition still seems to outperform the 3D face recognition methods, it is expected that this will change in the near future.", "title": "" }, { "docid": "83c81ecb870e84d4e8ab490da6caeae2", "text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" }, { "docid": "02926cfd609755bc938512545af08cb7", "text": "An efficient genetic transformation method for kabocha squash (Cucurbita moschata Duch cv. Heiankogiku) was established by wounding cotyledonary node explants with aluminum borate whiskers prior to inoculation with Agrobacterium. Adventitious shoots were induced from only the proximal regions of the cotyledonary nodes and were most efficiently induced on Murashige–Skoog agar medium with 1 mg/L benzyladenine. Vortexing with 1% (w/v) aluminum borate whiskers significantly increased Agrobacterium infection efficiency in the proximal region of the explants. Transgenic plants were screened at the T0 generation by sGFP fluorescence, genomic PCR, and Southern blot analyses. These transgenic plants grew normally and T1 seeds were obtained. We confirmed stable integration of the transgene and its inheritance in T1 generation plants by sGFP fluorescence and genomic PCR analyses. The average transgenic efficiency for producing kabocha squashes with our method was about 2.7%, a value sufficient for practical use.", "title": "" }, { "docid": "1fa6ee7cf37d60c182aa7281bd333649", "text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. 
Clifford Algebra provides the key to a unifled Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.", "title": "" }, { "docid": "29dcdc7c19515caad04c6fb58e7de4ea", "text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.", "title": "" }, { "docid": "3e80fb154cb594dc15f5318b774cf0c3", "text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness—82.1 ± 74.7 days). PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revelaed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences between clinical, radiological, and pathological picture of PML between Indian and Western countries.", "title": "" }, { "docid": "126aaec3593ab395c046098d5136fe10", "text": "This paper presents the SocioMetric Badges Corpus, a new corpus for social interaction studies collected during a 6 weeks contiguous period in a research institution, monitoring the activity of 53 people. 
The design of the corpus was inspired by the need to provide researchers and practitioners with: a) raw digital trace data that could be used to directly address the task of investigating, reconstructing and predicting people's actual social behavior in complex organizations, b) information about participants' individual characteristics (e.g., personality traits), along with c) data concerning the general social context (e.g., participants' social networks) and the specific situations they find themselves in.", "title": "" }, { "docid": "5d97670a243d1b1b5b5d1d6c47570820", "text": "In the 21st century, social media has burgeoned into one of the most used channels of communication in the society. As social media becomes well recognised for its potential as a social communication channel, recent years have witnessed an increased interest of using social media in higher education (Alhazmi, & Abdul Rahman, 2013; Al-rahmi, Othman, & Musa, 2014; Al-rahmi, & Othman, 2013a; Chen, & Bryer, 2012; Selwyn, 2009, 2012 to name a few). A survey by Pearson (Seaman, & Tinti-kane, 2013), The Social Media Survey 2013 shows that 41% of higher education faculty in the U.S.A. population has use social media in teaching in 2013 compared to 34% of them using it in 2012. The survey results also show the increase use of social media for teaching by educators and faculty professionals has increase because they see the potential in applying and integrating social media technology to their teaching. Many higher education institutions and educators are now finding themselves expected to catch up with the world of social media applications and social media users. This creates a growing phenomenon for the educational use of social media to create, engage, and share existing or newly produced information between lecturers and students as well as among the students. Facebook has quickly become the social networking site of choice by university students due to its remarkable adoption rates of Facebook in universities (Muñoz, & Towner, 2009; Roblyer et al., 2010; Sánchez, Cortijo, & Javed, 2014). With this in mind, this paper aims to investigate the use of Facebook closed group by undergraduate students in a private university in the Klang Valley, Malaysia. It is also to analyse the interaction pattern among the students using the Facebook closed group pages.", "title": "" }, { "docid": "1d084096acea83a62ecc6b010b302622", "text": "The investigation of human activity patterns from location-based social networks like Twitter is an established approach of how to infer relationships and latent information that characterize urban structures. Researchers from various disciplines have performed geospatial analysis on social media data despite the data’s high dimensionality, complexity and heterogeneity. However, user-generated datasets are of multi-scale nature, which results in limited applicability of commonly known geospatial analysis methods. Therefore in this paper, we propose a geographic, hierarchical self-organizing map (Geo-H-SOM) to analyze geospatial, temporal and semantic characteristics of georeferenced tweets. The results of our method, which we validate in a case study, demonstrate the ability to explore, abstract and cluster high-dimensional geospatial and semantic information from crowdsourced data. 
ARTICLE HISTORY Received 8 April 2015 Accepted 19 September 2015", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e75f830b902ca7d0e8d9e9fa03a62440", "text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.", "title": "" }, { "docid": "239e37736832f6f0de050ed1749ba648", "text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. 
For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.", "title": "" }, { "docid": "3224233a8a91c8d44e366b7b2ab8e7a1", "text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.", "title": "" }, { "docid": "7c6fa8d48ad058f1c65f1c775b71e4b5", "text": "A new method for determining nucleotide sequences in DNA is described. It is similar to the \"plus and minus\" method [Sanger, F. & Coulson, A. R. (1975) J. Mol. Biol. 94, 441-448] but makes use of the 2',3'-dideoxy and arabinonucleoside analogues of the normal deoxynucleoside triphosphates, which act as specific chain-terminating inhibitors of DNA polymerase. The technique has been applied to the DNA of bacteriophage varphiX174 and is more rapid and more accurate than either the plus or the minus method.", "title": "" } ]
scidocsrr
7d786ea784346a8ed03ca411fb44aed2
Automatic nonverbal behavior indicators of depression and PTSD: the effect of gender
[ { "docid": "f5bf18165f82b2fabdf43fbfed70a0fd", "text": "Depression is a typical mood disorder, and persons who are often in this state face the risk of mental and even physical problems. In recent years, there has therefore been increasing attention to machine-based depression analysis. In such a low mood, both the facial expression and voice of human beings appear different from the ones in normal states. This paper presents a novel method, which comprehensively models visual and vocal modalities, and automatically predicts the scale of depression. On one hand, the Motion History Histogram (MHH) extracts the dynamics from corresponding video and audio data to represent characteristics of subtle changes in facial and vocal expression of depression. On the other hand, for each modality, the Partial Least Squares (PLS) regression algorithm is applied to learn the relationship between the dynamic features and depression scales using training data, and then to predict the depression scale for an unseen sample. Predicted values of visual and vocal cues are further combined at the decision level for the final decision. The proposed approach is evaluated on the AVEC2013 dataset and experimental results clearly highlight its effectiveness and better performance than baseline results provided by the AVEC2013 challenge organiser.", "title": "" } ]
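As a rough illustration of the per-modality regression and decision-level fusion described in the passage above, the sketch below fits one PLS regressor per modality with scikit-learn and combines their score predictions; the feature extraction (MHH-style dynamics) is assumed to have been done elsewhere, and all names and the choice of 10 components are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_modality(X_train, y_train, n_components=10):
    """Fit one PLS regressor mapping per-modality dynamic features to scores."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    return pls

def fuse_predictions(models, feature_sets, weights=None):
    """Decision-level fusion: combine per-modality score predictions."""
    preds = np.column_stack(
        [m.predict(X).ravel() for m, X in zip(models, feature_sets)]
    )
    if weights is None:                      # default to a simple average
        weights = np.full(preds.shape[1], 1.0 / preds.shape[1])
    return preds @ np.asarray(weights, dtype=float)

# Hypothetical usage with pre-computed dynamic descriptors per modality:
# visual_model = fit_modality(X_visual_train, y_train)
# vocal_model = fit_modality(X_vocal_train, y_train)
# y_hat = fuse_predictions([visual_model, vocal_model],
#                          [X_visual_test, X_vocal_test])
```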
[ { "docid": "ffc2079d68489ea7fae9f55ffd288018", "text": "Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.", "title": "" }, { "docid": "2bb194184bea4b606ec41eb9eee0bfaa", "text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.", "title": "" }, { "docid": "b9b68f6e2fd049d588d6bdb0c4878640", "text": "Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks -- at the level of small network subgraphs -- remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. 
Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.\n Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "title": "" }, { "docid": "6a32d9e43d7f4558fa6dbbc596ce4496", "text": "Automatically mapping natural language into programming language semantics has always been a major and interesting challenge. In this paper, we approach such problem by carrying out mapping at syntactic level and then applying machine learning algorithms to derive an automatic translator of natural language questions into their associated SQL queries. For this purpose, we design a dataset of relational pairs containing syntactic trees of questions and queries and we encode them in Support Vector Machines by means of kernel functions. Pair classification experiments suggest that our approach is promising in deriving shared semantics between the languages above.", "title": "" }, { "docid": "49ff096deb6621438286942b792d6af3", "text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.", "title": "" }, { "docid": "2ac5b08573e8b243ac0eb5b6ab10c73d", "text": "The use of virtual reality (VR) display systems has escalated over the last 5 yr and may have consequences for those working within vision research. This paper provides a brief review of the literature pertaining to the representation of depth in stereoscopic VR displays. 
Specific attention is paid to the response of the accommodation system with its cross-links to vergence eye movements, and to the spatial errors that arise when portraying three-dimensional space on a two-dimensional window. It is suggested that these factors prevent large depth intervals of three-dimensional visual space being rendered with integrity through dual two-dimensional arrays.", "title": "" }, { "docid": "522363d36c93b692265c42f9f3976461", "text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "title": "" }, { "docid": "67269d2f4cc4b4ac07c855e3dfaca4ca", "text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.", "title": "" }, { "docid": "b52eb0d80b64fc962b17fb08ce446e12", "text": "INTRODUCTION\nPriapism describes a persistent erection arising from dysfunction of mechanisms regulating penile tumescence, rigidity, and flaccidity. A correct diagnosis of priapism is a matter of urgency requiring identification of underlying hemodynamics.\n\n\nAIMS\nTo define the types of priapism, address its pathogenesis and epidemiology, and develop an evidence-based guideline for effective management.\n\n\nMETHODS\nSix experts from four countries developed a consensus document on priapism; this document was presented for peer review and debate in a public forum and revisions were made based on recommendations of chairpersons to the International Consultation on Sexual Medicine. This report focuses on guidelines written over the past decade and reviews the priapism literature from 2003 to 2009. 
Although the literature is predominantly case series, recent reports have more detailed methodology including duration of priapism, etiology of priapism, and erectile function outcomes.\n\n\nMAIN OUTCOME MEASURES\nConsensus recommendations were based on evidence-based literature, best medical practices, and bench research.\n\n\nRESULTS\nBasic science supporting current concepts in the pathophysiology of priapism, and clinical research supporting the most effective treatment strategies are summarized in this review.\n\n\nCONCLUSIONS\nPrompt diagnosis and appropriate management of priapism are necessary to spare patients ineffective interventions and maximize erectile function outcomes. Future research is needed to understand corporal smooth muscle pathology associated with genetic and acquired conditions resulting in ischemic priapism. Better understanding of molecular mechanisms involved in the pathogenesis of stuttering ischemic priapism will offer new avenues for medical intervention. Documenting erectile function outcomes based on duration of ischemic priapism, time to interventions, and types of interventions is needed to establish evidence-based guidance. In contrast, pathogenesis of nonischemic priapism is understood, and largely attributable to trauma. Better documentation of onset of high-flow priapism in relation to time of injury, and response to conservative management vs. angiogroaphic or surgical interventions is needed to establish evidence-based guidance.", "title": "" }, { "docid": "4f64e7ff2bed569d73da9cae011e995d", "text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.", "title": "" }, { "docid": "719fab5525df0847e2cdd015bb2795ff", "text": "The future smart grid is envisioned as a large scale cyberphysical system encompassing advanced power, communications, control, and computing technologies. To accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyberphysical systems. In this context, this article is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: microgrid systems, demand-side management, and communications. 
In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game-theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the application of game theory in smart grid systems tailored to the interdisciplinary characteristics of these systems that integrate components from power systems, networking, communications, and control.", "title": "" }, { "docid": "1865cf66083c30d74b555eab827d0f5f", "text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.", "title": "" }, { "docid": "e8933b0afcd695e492d5ddd9f87aeb81", "text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. 
On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.", "title": "" }, { "docid": "150f27f47e9ffd6cd4bc0756bd08aed4", "text": "Sunni extremism poses a significant danger to society, yet it is relatively easy for these extremist organizations to spread jihadist propaganda and recruit new members via the Internet, Darknet, and social media. The sheer volume of these sites make them very difficult to police. This paper discusses an approach that can assist with this problem, by automatically identifying a subset of web pages and social media content (or any text) that contains extremist content. The approach utilizes machine learning, specifically neural networks and deep learning, to classify text as containing “extremist” or “benign” (i.e., not extremist) content. This method is robust and can effectively learn to classify extremist multilingual text of varying length. This study also involved the construction of a high quality dataset for training and testing, put together by a team of 40 people (some with fluency in Arabic) who expended 9,500 hours of combined effort. This dataset should facilitate future research on this topic.", "title": "" }, { "docid": "e0382c9d739281b4bc78f4a69827ac37", "text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.", "title": "" }, { "docid": "bf21fd50b793f74d5d0b026177552d2e", "text": "This paper aims to evaluate the security and accuracy of Multi-Factor Biometric Authentication (MFBA) schemes that are based on applying UserBased Transformations (UBTs) on biometric features. Typically, UBTs employ transformation keys generated from passwords/PINs or retrieved from tokens. In this paper, we not only highlight the importance of simulating the scenario of compromised transformation keys rigorously, but also show that there has been misevaluation of this scenario as the results can be easily misinterpreted. In particular, we expose the falsehood of the widely reported claim in the literature that in the case of stolen keys, authentication accuracy drops but remains close to the authentication accuracy of biometric only system. We show that MFBA systems setup to operate at zero (%) Equal Error Rates (EER) can be undermined in the event of keys being compromised where the False Acceptance Rate reaches unacceptable levels. We demonstrate that for commonly used recognition schemes the FAR could be as high as 21%, 56%, and 66% for iris, fingerprint, and face biometrics respectively when using stolen transformation keys compared to near zero (%) EER when keys are assumed secure. We also discuss the trade off between improving accuracy of biometric systems using additional authentication factor(s) and compromising the security when the additional factor(s) are compromised. 
Finally, we propose mechanisms to enhance the security as well as the accuracy of MFBA schemes.", "title": "" }, { "docid": "22445127362a9a2b16521a4a48f24686", "text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.", "title": "" }, { "docid": "0332be71a529382e82094239db31ea25", "text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).", "title": "" }, { "docid": "bb6314a8e6ec728d09aa37bfffe5c835", "text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.", "title": "" }, { "docid": "0cbb4731b58c440752847874bfdad63a", "text": "In order to increase accuracy of the linear array CCD edge detection system, a wavelet-based sub-pixel edge detection method is proposed, the basic process is like this: firstly, according to the step gradient features, automatically calculate the pixel-level border of the CCD image. 
Then the wavelet transform algorithm is used to refine the image’s edge location to the sub-pixel level, thus detecting the sub-pixel edge. In this way we prove that the method has no error in principle and at the same time possesses good anti-noise performance. Experiments show that, with no special requirements, the accuracy of the method is better than 0.02 pixel, thus verifying the correctness of the theory.", "title": "" } ]
scidocsrr
714fb6dba1be46c6082bc417faf4dcbb
Robust 2D/3D face mask presentation attack detection scheme by exploring multiple features and comparison score level fusion
[ { "docid": "db5865f8f8701e949a9bb2f41eb97244", "text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" } ]
[ { "docid": "53477003e3c57381201a69e7cc54cfc9", "text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.", "title": "" }, { "docid": "69f853b90b837211e24155a2f55b9a95", "text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.", "title": "" }, { "docid": "630e44732755c47fc70be111e40c7b67", "text": "An algebra for geometric reasoning is developed that is amenable to software implementation. The features of the algebra are chosen to support geometric programming of the variety found in computer graphics and computer aided geometric design applications. The implementation of the algebra in C++ is described, and several examples illustrating the use of this software are given.", "title": "" }, { "docid": "071ba3d1cec138011f398cae8589b77b", "text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. 
None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ff91ed2072c93eeae5f254fb3de0d780", "text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.", "title": "" }, { "docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc", "text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.", "title": "" }, { "docid": "737bc68c51d2ae7665c47a060da3e25f", "text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated si tuations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). ©2001 Elsevier Science Ltd. All rights reserved. _____________________________________________________________________________________ Successful goal attainment demands completing two different tasks. 
People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from selfregulatory strategies. In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how", "title": "" }, { "docid": "3c8ac7bd31d133b4d43c0d3a0f08e842", "text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.", "title": "" }, { "docid": "893437dbc30509dc5a1133ab74d4b78b", "text": "Light scattered from multiple surfaces can be used to retrieve information of hidden environments. However, full three-dimensional retrieval of an object hidden from view by a wall has only been achieved with scanning systems and requires intensive computational processing of the retrieved data. Here we use a non-scanning, single-photon single-pixel detector in combination with a deep convolutional artificial neural network: this allows us to locate the position and to also simultaneously provide the actual identity of a hidden person, chosen from a database of people (N = 3). 
Artificial neural networks applied to specific computational imaging problems can therefore enable novel imaging capabilities with hugely simplified hardware and processing times.", "title": "" }, { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" }, { "docid": "da4d3534f0f8cf463d4dfff9760b68f4", "text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.", "title": "" }, { "docid": "729b29b5ab44102541f3ebf8d24efec3", "text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. 
The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.", "title": "" }, { "docid": "4e2b0d647da57a96085786c5aa2d15d9", "text": "We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropyregularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.", "title": "" }, { "docid": "e0217457b00d4c1ba86fc5d9faede342", "text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.", "title": "" }, { "docid": "02138b6fea0d80a6c365cafcc071e511", "text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. 
This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.", "title": "" }, { "docid": "8321eecac6f8deb25ffd6c1b506c8ee3", "text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.", "title": "" }, { "docid": "db2e7cc9ea3d58e0c625684248e2ef80", "text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. 
The efficiency of the theory, however, varies between health-related behavior categories.", "title": "" }, { "docid": "06e04aec6dccf454b63c98b4c5e194e3", "text": "Existing measures of peer pressure and conformity may not be suitable for screening large numbers of adolescents efficiently, and few studies have differentiated peer pressure from theoretically related constructs, such as conformity or wanting to be popular. We developed and validated short measures of peer pressure, peer conformity, and popularity in a sample ( n= 148) of adolescent boys and girls in grades 11 to 13. Results showed that all measures constructed for the study were internally consistent. Although all measures of peer pressure, conformity, and popularity were intercorrelated, peer pressure and peer conformity were stronger predictors of risk behaviors than measures assessing popularity, general conformity, or dysphoria. Despite a simplified scoring format, peer conformity vignettes were equal to if not better than the peer pressure measures in predicting risk behavior. Findings suggest that peer pressure and peer conformity are potentially greater risk factors than a need to be popular, and that both peer pressure and peer conformity can be measured with short scales suitable for large-scale testing.", "title": "" }, { "docid": "6c532169b4e169b9060ab9e17cb42602", "text": "The complete nucleotide sequence of tomato infectious chlorosis virus (TICV) was determined and compared with those of other members of the genus Crinivirus. RNA 1 is 8,271 nucleotides long with three open reading frames and encodes proteins involved in replication. RNA 2 is 7,913 nucleotides long and encodes eight proteins common within the genus Crinivirus that are involved in genome protection, movement and other functions yet to be identified. Similarity between TICV and other criniviruses varies throughout the genome but TICV is related more closely to lettuce infectious yellows virus than to any other crinivirus, thus identifying a third group within the genus.", "title": "" } ]
scidocsrr
370c6f1eee3d5470541dfaf9052d800c
Regressing a 3D Face Shape from a Single Image
[ { "docid": "9ecb74866ca42b7fd559145deaed52a4", "text": "We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Models (AAM) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998.]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in car video sequences. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1d73817f8b1b54a82308106ee526a62b", "text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.", "title": "" }, { "docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13", "text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. 
We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM-based 3D spatio-temporal facial descriptor. It is expected that such a database will help move facial expression analysis from a static 3D space to a dynamic 3D space, with the goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.", "title": "" } ]
[ { "docid": "c4577ac95efb55a07e0748a10a9d4658", "text": "This paper describes the design of a six-axis microelectromechanical systems (MEMS) force-torque sensor. A movable body is suspended by flexures that allow deflections and rotations along the x-, y-, and z-axes. The orientation of this movable body is sensed by seven capacitors. Transverse sensing is used for all capacitors, resulting in a high sensitivity. A batch fabrication process is described as capable of fabricating these multiaxis sensors with a high yield. The force sensor is experimentally investigated, and a multiaxis calibration method is described. Measurements show that the resolution is on the order of a micro-Newton and nano-Newtonmeter. This is the first six-axis MEMS force sensor that has been successfully developed.", "title": "" }, { "docid": "f6574fbbdd53b2bc92af485d6c756df0", "text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.", "title": "" }, { "docid": "27e60092f83e7572a5a7776113d8c97c", "text": "Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. 
While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.", "title": "" }, { "docid": "deed140862c62fa8be4a8a58ffc1d7dc", "text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "fb75e0c18c4852afac162b60554b67b1", "text": "OBJECTIVE\nTo evaluate the feasibility and safety of home rehabilitation of the hand using a robotic glove, and, in addition, its effectiveness, in hemiplegic patients after stroke.\n\n\nMETHODS\nIn this non-randomized pilot study, 21 hemiplegic stroke patients (Ashworth spasticity index ≤ 3) were prescribed, after in-hospital rehabilitation, a 2-month home-program of intensive hand training using the Gloreha Lite glove that provides computer-controlled passive mobilization of the fingers. Feasibility was measured by: number of patients who completed the home-program, minutes of exercise and number of sessions/patient performed. Safety was assessed by: hand pain with a visual analog scale (VAS), Ashworth spasticity index for finger flexors, opponents of the thumb and wrist flexors, and hand edema (circumference of forearm, wrist and fingers), measured at start (T0) and end (T1) of rehabilitation. Hand motor function (Motricity Index, MI), fine manual dexterity (Nine Hole Peg Test, NHPT) and strength (Grip test) were also measured at T0 and T1.\n\n\nRESULTS\nPatients performed, over a mean period 56 (49-63) days, a total of 1699 (1353-2045) min/patient of exercise with Gloreha Lite, 5.1 (4.3-5.8) days/week. Seventeen patients (81%) completed the full program. The mean VAS score of hand pain, Ashworth spasticity index and hand edema did not change significantly at T1 compared to T0. The MI, NHPT and Grip test improved significantly (p = 0.0020, 0.0156 and 0.0024, respectively) compared to baseline.\n\n\nCONCLUSION\nGloreha Lite is feasible and safe for use in home rehabilitation. 
The efficacy data show a therapeutic effect which need to be confirmed by a randomized controlled study.", "title": "" }, { "docid": "cebc36cd572740069ab22e8181c405c4", "text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.", "title": "" }, { "docid": "c3f7a3a4e31a610e6ecc149cede3db30", "text": "OBJECTIVES\nCross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies.\n\n\nDATA SOURCES\nA group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals.\n\n\nREVIEW METHODS\nThe synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies.\n\n\nRESULTS\nThe evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research.\n\n\nCONCLUSIONS\nThe finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other authors examining this issue. 
The analysis demonstrated that the criteria produced by this study may provide useful guidelines for evaluating cross-language research and for novice cross-language researchers designing their first studies. Finally, the study also indicates that researchers attempting cross-language studies need to address the methodological issues surrounding language barriers between researchers and participants more systematically.", "title": "" }, { "docid": "9738485d5c61ac43e3a1e101b063dfd5", "text": "Sentiment analysis is one of the most popular natural language processing techniques. It aims to identify the sentiment polarity (positive, negative, neutral or mixed) within a given text. The proper lexicon knowledge is very important for the lexicon-based sentiment analysis methods since they hinge on using the polarity of the lexical item to determine a text's sentiment polarity. However, it is quite common that some lexical items appear positive in the text of one domain but appear negative in another. In this paper, we propose an innovative knowledge building algorithm to extract sentiment lexicon knowledge through computing their polarity value based on their polarity distribution in text dataset, such as in a set of domain specific reviews. The proposed algorithm was tested by a set of domain microblogs. The results demonstrate the effectiveness of the proposed method. The proposed lexicon knowledge extraction method can enhance the performance of knowledge based sentiment analysis.", "title": "" }, { "docid": "a59f82d98f978701d6a4271db1674d2a", "text": "Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, while a subsequent Gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.", "title": "" }, { "docid": "df70cb4b1d37680cccb7d79bdea5d13b", "text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. 
Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.", "title": "" }, { "docid": "b51021e995fc4be50028a0a152db7e7a", "text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.", "title": "" }, { "docid": "3ed5ec863971e04523a7ede434eaa80d", "text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.", "title": "" }, { "docid": "0b231777fedf27659b4558aaabb872be", "text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. 
In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.", "title": "" }, { "docid": "eec5034991f82e0d809aba5e3eb94fe2", "text": "This paper considers John Dewey’s dual reformist-preservationist agenda for education in the context of current debates about the role of experience in management learning. The paper argues for preserving experience-based approaches to management learning by revising the concept of experience to more clearly account for the relationship between personal and social (i.e. , tacit/explicit) knowledge. By reviewing, comparing and extending critiques of Kolb’s experiential learning theory and reconceptualizing the learning process based on post-structural analysis of psychoanalyst Jacque Lacan, the paper defines experience within the context of language and social action. This perspective is contrasted to action, cognition, critical reflection and other experience-based approaches to management learning. Implications for management theory, pedagogy and practice suggest greater emphasis on language and conversation in the learning process. Future directions for research are explored.", "title": "" }, { "docid": "3900864885cf79e33683ec5c595235ad", "text": "Digital mammogram has become the most effective technique for early breast cancer detection modality. Digital mammogram takes an electronic image of the breast and stores it directly in a computer. High quality mammogram images are high resolution and large size images. Processing these images require high computational capabilities. The transmission of these images over the net is sometimes critical especially if the diagnosis of remote radiologists is required. The aim of this study is to develop an automated system for assisting the analysis of digital mammograms. Computer image processing techniques will be applied to enhance images and this is followed by segmentation of the region of interest (ROI). Subsequently, the textural features will be extracted from the ROI. The texture features will be used to classify the ROIs as either masses or non-masses.", "title": "" }, { "docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1", "text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.", "title": "" }, { "docid": "eacf295c0cbd52599a1567c6d4193007", "text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. 
However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.", "title": "" }, { "docid": "2923652ff988572a40d682e2a459707a", "text": "Clustering analysis is a descriptive task that seeks to identify homogeneous groups of objects based on the values of their attributes. This paper proposes a new algorithm for K-medoids clustering which runs like the K-means algorithm and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and uses it for finding new medoids at every iterative step. We evaluate the proposed algorithm using real and artificial data and compare with the results of other algorithms. The proposed algorithm takes the reduced time in computation with comparable performance as compared to the Partitioning Around Medoids.", "title": "" }, { "docid": "1464f9d7a60a59bfdd6399ea6cd9fd99", "text": "Table of", "title": "" }, { "docid": "34f8765ca28666cfeb94e324882a71d6", "text": "We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.", "title": "" } ]
scidocsrr
cfebf75dbb7549b5b5a59c2699d9fa6d
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels
[ { "docid": "5ca7e5a8770b931c070c51047ca99108", "text": "The symmetric positive definite (SPD) matrices have been widely used in image and vision problems. Recently there are growing interests in studying sparse representation (SR) of SPD matrices, motivated by the great success of SR for vector data. Though the space of SPD matrices is well-known to form a Lie group that is a Riemannian manifold, existing work fails to take full advantage of its geometric structure. This paper attempts to tackle this problem by proposing a kernel based method for SR and dictionary learning (DL) of SPD matrices. We disclose that the space of SPD matrices, with the operations of logarithmic multiplication and scalar logarithmic multiplication defined in the Log-Euclidean framework, is a complete inner product space. We can thus develop a broad family of kernels that satisfies Mercer's condition. These kernels characterize the geodesic distance and can be computed efficiently. We also consider the geometric structure in the DL process by updating atom matrices in the Riemannian space instead of in the Euclidean space. The proposed method is evaluated with various vision problems and shows notable performance gains over state-of-the-arts.", "title": "" }, { "docid": "6f484310532a757a28c427bad08f7623", "text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.", "title": "" }, { "docid": "c41c56eeb56975c4d65e3847aa6b8b01", "text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. 
Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. The classical orthogonal subspace method (OSM) is also investigated for a similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using the ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency.", "title": "" } ]
[ { "docid": "91c5ad5a327026a424454779f96da601", "text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.", "title": "" }, { "docid": "b71ec61f22457a5604a1c46087685e45", "text": "Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.", "title": "" }, { "docid": "5824a316f20751183676850c119c96cd", "text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall", "title": "" }, { "docid": "c2fe102ed88b248434b51130d693caca", "text": "The Internet architecture uses congestion avoidance mechanisms implemented in the transport layer protocol like TCP to provide good service under heavy load. If network nodes distribute bandwidth fairly, the Internet would be more robust and accommodate a wide variety of applications. Various congestion and bandwidth management schemes have been proposed for this purpose and can be classified into two broad categories: packet scheduling algorithms such as fair queueing (FQ) which explicitly provide bandwidth shares by scheduling packets. They are more difficult to implement compared to FIFO queueing. The second category has active queue management schemes such as RED which use FIFO queues at the routers. They are easy to implement but don't aim to provide (and, in the presence of non-congestion-responsive sources, don't provide) fairness. An algorithm called AFD (approximate fair dropping), has been proposed to provide approximate, weighted max-min fair bandwidth allocations with relatively low complexity. AFD has since been widely adopted by the industry. This paper describes the evolution of AFD from a research project into an industry setting, focusing on the changes it has undergone in the process. 
AFD now serves as a traffic management module, which can be implemented either using a single FIFO or overlaid on top of extant per-flow queueing structures and which provides approximate bandwidth allocation in a simple fashion. The AFD algorithm has been implemented in several switch and router platforms at Cisco Systems, successfully transitioning from the academic world into the industry.", "title": "" }, { "docid": "46baa51f8c36c9d913bc9ece46aa1919", "text": "Radio frequency identification (RFID) has been identified as a crucial technology for the modern 21st century knowledge-based economy. Many businesses started realising RFID to be able to improve their operational efficiency, achieve additional cost savings, and generate opportunities for higher revenues. To investigate how RFID technology has brought an impact to warehousing, a comprehensive analysis of research findings available through leading scientific article databases was conducted. Articles from years 1995 to 2010 were reviewed and analysed according to warehouse operations, RFID application domains, and benefits achieved. This paper presents four discussion topics covering RFID innovation, including its applications, perceived benefits, obstacles to its adoption and future trends. This is aimed at elucidating the current state of RFID in the warehouse and giving insights for the academics to establish new research scope and for the practitioners to evaluate their assessment of adopting RFID in the warehouse.", "title": "" }, { "docid": "723aeab499abebfec38bfd8cf8484293", "text": "Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50× larger than previous deep models.", "title": "" }, { "docid": "729d9802488a45d889d891257738a65b", "text": "This paper presents an investigation of speech recognition classification performance. The investigation is performed using two standard neural network structures as the classifier.
The standard neural network types utilized include a Feed-forward Neural Network (NN) with the back-propagation algorithm and Radial Basis Function Neural Networks.", "title": "" }, { "docid": "b9ad751e5b7e46fd79848788b10d7ab9", "text": "In this paper, we propose a cross-lingual convolutional neural network (CNN) model that is based on word and phrase embeddings learned from unlabeled data in two languages and dependency grammar. Compared to traditional machine translation (MT) based methods for cross-lingual sentence modeling, our model is much simpler and does not need parallel corpora or language specific features. We only use a bilingual dictionary and dependency parser. This makes our model particularly appealing for resource poor languages. We evaluate our model using English and Chinese data on several sentence classification tasks. We show that our model achieves a comparable and even better performance than the traditional MT-based method.", "title": "" }, { "docid": "52b5fa0494733f2f6b72df0cdfad01f4", "text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.", "title": "" }, { "docid": "77278e6ba57e82c88f66bd9155b43a50", "text": "Up to the time when a huge corruption scandal, popularly labeled “tangentopoli” (bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence.
In fact, this paper takes the view that it could not have been a coincidence.", "title": "" }, { "docid": "dfccff16f4600e8cc297296481e50b7b", "text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.", "title": "" }, { "docid": "6d6b844d89cd16196c27b70dec2bcd4d", "text": "Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms \"error\" and \"discrepancy\" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised.\n\n\nTEACHING POINTS\n• Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.", "title": "" }, { "docid": "fe98350e6fa6d91a2e63dc19646a0307", "text": "One of the most widely studied systems of argumentation is the one described by Dung in a paper from 1995. Unfortunately, this framework does not allow for joint attacks on arguments, which we argue must be required of any truly abstract argumentation framework. A few frameworks can be said to allow for such interactions among arguments, but for various reasons we believe that these are inadequate for modelling argumentation systems with joint attacks. In this paper we propose a generalization of the framework of Dung, which allows for sets of arguments to attack other arguments. 
We extend the semantics associated with the original framework to this generalization, and prove that all results in the paper by Dung have an equivalent in this more abstract framework.", "title": "" }, { "docid": "355720b7bbdc6d6d30987fc0dad5602e", "text": "To assess the likelihood of procedural success in patients with multivessel coronary disease undergoing percutaneous coronary angioplasty, 350 consecutive patients (1,100 stenoses) from four clinical sites were evaluated. Eighteen variables characterizing the severity and morphology of each stenosis and 18 patient-related variables were assessed at a core angiographic laboratory and at the clinical sites. Most patients had Canadian Cardiovascular Society class III or IV angina (72%) and two-vessel coronary disease (78%). Left ventricular function was generally well preserved (mean ejection fraction, 58 +/- 12%; range, 18-85%) and 1.9 +/- 1.0 stenoses per patient had attempted percutaneous coronary angioplasty. Procedural success (less than or equal to 50% final diameter stenosis in one or more stenoses and no major ischemic complications) was achieved in 290 patients (82.8%), and an additional nine patients (2.6%) had a reduction in diameter stenosis by 20% or more with a final diameter stenosis 51-60% and were without major complications. Major ischemic complications (death, myocardial infarction, or emergency bypass surgery) occurred in 30 patients (8.6%). In-hospital mortality was 1.1%. Stepwise regression analysis determined that a modified American College of Cardiology/American Heart Association Task Force (ACC/AHA) classification of the primary target stenosis (with type B prospectively divided into type B1 [one type B characteristic] and type B2 [greater than or equal to two type B characteristics]) and the presence of diabetes mellitus were the only variables independently predictive of procedural outcome (target stenosis modified ACC/AHA score; p less than 0.001 for both success and complications; diabetes mellitus: p = 0.003 for success and p = 0.016 for complications). Analysis of success and complications on a per stenosis dilated basis showed, for type A stenoses, a 92% success and a 2% complication rate; for type B1 stenoses, an 84% success and a 4% complication rate; for type B2 stenoses, a 76% success and a 10% complication rate; and for type C stenoses, a 61% success and a 21% complication rate. The subdivision into types B1 and B2 provided significantly more information in this clinically important intermediate risk group than did the standard ACC/AHA scheme. The stenosis characteristics of chronic total occlusion, high grade (80-99% diameter) stenosis, stenosis bend of more than 60 degrees, and excessive tortuosity were particularly predictive of adverse procedural outcome. This improved scheme may improve clinical decision making and provide a framework on which to base meaningful subgroup analysis in randomized trials assessing the efficacy of percutaneous coronary angioplasty.", "title": "" }, { "docid": "69c8584255b16e6bc05fdfc6510d0dc4", "text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. 
The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.", "title": "" }, { "docid": "274829e884c6ba5f425efbdce7604108", "text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.", "title": "" }, { "docid": "b25e35dd703d19860bbbd8f92d80bd26", "text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. 
It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.", "title": "" }, { "docid": "dcf9cba8bf8e2cc3f175e63e235f6b81", "text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.", "title": "" }, { "docid": "6dcb885d26ca419925a094ade17a4cf7", "text": "This paper presents two different Ku-Band Low-Profile antenna concepts for Mobile Satellite Communications. The antennas are based on low-cost hybrid mechanical-electronic steerable solutions but, while the first one allows a broadband reception of a satellite signal (Receive-only antenna concept), the second one provides transmit and receive functions for a bi-directional communication link between the satellite and the mobile user terminal (Transmit-Receive antenna). Both examples are suitable for integration in land vehicles and aircrafts.", "title": "" }, { "docid": "33084a3b41e8932b4dfaba5825d469e4", "text": "OBJECTIVE\nBecause adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media.\n\n\nMETHODS AND MATERIALS\nWe develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter).\n\n\nRESULTS\nWhen investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. 
When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines.\n\n\nCONCLUSIONS\nOur experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness.", "title": "" } ]
scidocsrr
afb4607b5e8407b9632844376d5681f5
Turbo and Turbo-Like Codes: Principles and Applications in Telecommunications
[ { "docid": "48fde3a2cd8781ce675ce116ed8ee861", "text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.", "title": "" } ]
[ { "docid": "901174e2dd911afada2e8ccf245d25f3", "text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.", "title": "" }, { "docid": "d7310e830f85541aa1d4b94606c1be0c", "text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" }, { "docid": "0387b6a593502a9c74ee62cd8eeec886", "text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.", "title": "" }, { "docid": "d05a179a28cab9cb47be0638ae7b525c", "text": "Ionizing radiation effects on CMOS image sensors (CIS) manufactured using a 0.18 mum imaging technology are presented through the behavior analysis of elementary structures, such as field oxide FET, gated diodes, photodiodes and MOSFETs. Oxide characterizations appear necessary to understand ionizing dose effects on devices and then on image sensors. The main degradations observed are photodiode dark current increases (caused by a generation current enhancement), minimum size NMOSFET off-state current rises and minimum size PMOSFET radiation induced narrow channel effects. 
All these effects are attributed to the shallow trench isolation degradation which appears much more sensitive to ionizing radiation than inter layer dielectrics. Unusual post annealing effects are reported in these thick oxides. Finally, the consequences on sensor design are discussed thanks to an irradiated pixel array and a comparison with previous work is discussed.", "title": "" }, { "docid": "3d846789f15f5a70cd36b45f00c6861a", "text": "Web-based businesses succeed by cultivating consumers' trust, starting with their beliefs, attitudes, intentions, and willingness to perform transactions at Web sites and with the organizations behind them.", "title": "" }, { "docid": "558abc8028d1d5b6956d2cf046efb983", "text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.", "title": "" }, { "docid": "0d30cfe8755f146ded936aab55cb80d3", "text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. 
The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).", "title": "" }, { "docid": "19d6ad18011815602854685211847c52", "text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.", "title": "" }, { "docid": "2d5f6f0bd7ff91525fb130fd785ce281", "text": "Security flaws are open doors to attack embedded systems and must be carefully assessed in order to determine threats to safety and security. 
Subsequently securing a system, that is, integrating security mechanisms into the system's architecture can itself impact the system's safety, for instance deadlines could be missed due to an increase in computations and communications latencies. SysML-Sec addresses these issues with a model-driven approach that promotes the collaboration between system designers and security experts at all design and development stages, e.g., requirements, attacks, partitioning, design, and validation. A central point of SysML-Sec is its partitioning stage during which safety-related and security-related functions are explored jointly and iteratively with regards to requirements and attacks. Once partitioned, the system is designed in terms of system's functions and security mechanisms, and formally verified from both the safety and the security perspectives. Our paper illustrates the whole methodology with the evaluation of a security mechanism added to an existing automotive system.", "title": "" }, { "docid": "f785636331f737d8dc14b6958277553f", "text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。", "title": "" }, { "docid": "7056b8e792a2bd1535cf020b2aeab2c7", "text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. 
All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.", "title": "" }, { "docid": "3e83d63920d7d8650a2eeaa2e68ec640", "text": "Antibiotic resistance consists of a dynamic web. In this review, we describe the path by which different antibiotic residues and antibiotic resistance genes disseminate among relevant reservoirs (human, animal, and environmental settings), evaluating how these events contribute to the current scenario of antibiotic resistance. The relationship between the spread of resistance and the contribution of different genetic elements and events is revisited, exploring examples of the processes by which successful mobile resistance genes spread across different niches. The importance of classic and next generation molecular approaches, as well as action plans and policies which might aid in the fight against antibiotic resistance, are also reviewed.", "title": "" }, { "docid": "7e941f9534357fca740b97a99e86f384", "text": "The head-direction (HD) cells found in the limbic system in freely mov ing rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.", "title": "" }, { "docid": "7f2fcc4b4af761292d3f77ffd1a2f7c3", "text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. 
We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.", "title": "" }, { "docid": "c7f0856c282d1039e44ba6ef50948d32", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "dcf231b887d7caeec341850507561197", "text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods. 
Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.", "title": "" }, { "docid": "5b7f20103c99a93c46efe4575f012e7d", "text": "The availability of several Advanced Driver Assistance Systems has put a correspondingly large number of inexpensive, yet capable sensors on production vehicles. By combining this reality with expertise from the DARPA Grand and Urban Challenges in building autonomous driving platforms, we were able to design and develop an Autonomous Valet Parking (AVP) system on a 2006 Volkwagen Passat Wagon TDI using automotive grade sensors. AVP provides the driver with both convenience and safety benefits - the driver can leave the vehicle at the entrance of a parking garage, allowing the vehicle to navigate the structure, find a spot, and park itself. By leveraging existing software modules from the DARPA Urban Challenge, our efforts focused on developing a parking spot detector, a localization system that did not use GPS, and a back-in parking planner. This paper focuses on describing the design and development of the last two modules.", "title": "" }, { "docid": "ba6865dc3c93ac52c9f1050f159b9e1a", "text": "A review of various properties of ceramic-reinforced aluminium matrix composites is presented in this paper. The properties discussed include microstructural, optical, physical and mechanical behaviour of ceramic-reinforced aluminium matrix composites and effects of reinforcement fraction, particle size, heat treatment and extrusion process on these properties. The results obtained by many researchers indicated the uniform distribution of reinforced particles with localized agglomeration at some places, when the metal matrix composite was processed through stir casting method. The density, hardness, compressive strength and toughness increased with increasing reinforcement fraction; however, these properties may reduce in the presence of porosity in the composite material. The particle size of reinforcements affected the hardness adversely. Tensile strength and flexural strength were observed to be increased up to a certain reinforcement fraction in the composites, beyond which these were reduced. The mechanical properties of the composite materials were improved by either thermal treatment or extrusion process. Initiation and growth of fine microcracks leading to macroscopic failure, ductile failure of the aluminium matrix, combination of particle fracture and particle pull-out, overload failure under tension and brittle fracture were the failure mode and mechanisms, as observed by previous researchers, during fractography analysis of tensile specimens of ceramic-reinforced aluminium matrix composites.", "title": "" }, { "docid": "d74874cf15642c87c7de51e54275f9be", "text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. 
Black moves were trained on by using a data augmentation to frame it as a move made by the", "title": "" }, { "docid": "9c6601360694b48c137ec2a974635106", "text": "This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, multilayer perceptron (MLP) with rectifier units, to exact features. Instead of MLP, we employ maxout MLP to learn a variety of piecewise linear activation functions and to mediate the problem of vanishing gradients that can occur when using rectifier units. Moreover, batch normalization is applied to reduce the saturation of maxout units by pre-conditioning the model and dropout is applied to prevent overfitting. Finally, average pooling is used in all pooling layers to regularize maxout MLP in order to facilitate information abstraction in every receptive field while tolerating the change of object position. Because average pooling preserves all features in the local patch, the proposed MIN model can enforce the suppression of irrelevant information during training. Our experiments demonstrated the state-of-the-art classification performance when the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and comparable performance for SVHN dataset.", "title": "" } ]
scidocsrr
7972e6dcf1d47bde9246d77993b8d733
Anchor-free distributed localization in sensor networks
[ { "docid": "0255ca668dee79af0cb314631cb5ab2d", "text": "Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like marine biology, requires that these nodes be very small, light, un-tethered and unobtrusive, imposing substantial restrictions on the amount of additional hardware that can be placed at each node. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the use of GPS(Global Positioning System) for all nodes in these networks. The problem of localization, i.e., determining where a given node is physically located in a network is a challenging one, and yet extremely crucial for many applications of very large device networks. It needs to be solved in the absence of GPS on all the nodes in outdoor environments. In this paper, we propose a simple connectivity-metric based method for localization in outdoor environments that makes use of the inherent radiofrequency(RF) communications capabilities of these devices. A fixed number of reference points in the network transmit periodic beacon signals. Nodes use a simple connectivity metric to infer proximity to a given subset of these reference points and then localize themselves to the centroid of the latter. The accuracy of localization is then dependent on the separation distance between two adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90% of our data points is within one-third of the separation distance. Keywords—localization, radio, wireless, GPS-less, connectivity, sensor networks.", "title": "" }, { "docid": "ef5f1aa863cc1df76b5dc057f407c473", "text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.", "title": "" } ]
[ { "docid": "721d26f8ea042c2fb3a87255a69e85f5", "text": "The Time-Triggered Protocol (TTP), which is intended for use in distributed real-time control applications that require a high dependability and guaranteed timeliness, is discussed. It integrates all services that are required in the design of a fault-tolerant real-time system, such as predictable message transmission, message acknowledgment in group communication, clock synchronization, membership, rapid mode changes, redundancy management, and temporary blackout handling. It supports fault-tolerant configurations with replicated nodes and replicated communication channels. TTP provides these services with a small overhead so it can be used efficiently on twisted pair channels as well as on fiber optic networks.", "title": "" }, { "docid": "441633276271b94dc1bd3e5e28a1014d", "text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.", "title": "" }, { "docid": "56d9b47d1860b5a80c62da9f75b6769d", "text": "Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality application, but their utility in practical applications has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues. The second phase optimizes the display for a specific user and the placement of the display on the head. Several phase II alternatives provide flexibility in a variety of applications including applications involving untrained users.", "title": "" }, { "docid": "9cad72ab02778fa410a6bd1feb608283", "text": "Acoustic-based music recommender systems have received increasing interest in recent years. Due to the semantic gap between low level acoustic features and high level music concepts, many researchers have explored collaborative filtering techniques in music recommender systems. Traditional collaborative filtering music recommendation methods only focus on user rating information. However, there are various kinds of social media information, including different types of objects and relations among these objects, in music social communities such as Last.fm and Pandora. This information is valuable for music recommendation. 
However, there are two challenges to exploit this rich social media information: (a) There are many different types of objects and relations in music social communities, which makes it difficult to develop a unified framework taking into account all objects and relations. (b) In these communities, some relations are much more sophisticated than pairwise relation, and thus cannot be simply modeled by a graph. In this paper, we propose a novel music recommendation algorithm by using both multiple kinds of social media information and music acoustic-based content. Instead of graph, we use hypergraph to model the various objects and relations, and consider music recommendation as a ranking problem on this hypergraph. While an edge of an ordinary graph connects only two objects, a hyperedge represents a set of objects. In this way, hypergraph can be naturally used to model high-order relations. Experiments on a data set collected from the music social community Last.fm have demonstrated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "d0a765968e7cc4cf8099f66e0c3267da", "text": "We explore the lattice sphere packing representation of a multi-antenna system and the algebraic space-time (ST) codes. We apply the sphere decoding (SD) algorithm to the resulted lattice code. For the uncoded system, SD yields, with small increase in complexity, a huge improvement over the well-known V-BLAST detection algorithm. SD of algebraic ST codes exploits the full diversity of the coded multi-antenna system, and makes the proposed scheme very appealing to take advantage of the richness of the multi-antenna environment. The fact that the SD does not depend on the constellation size, gives rise to systems with very high spectral efficiency, maximum-likelihood performance, and low decoding complexity.", "title": "" }, { "docid": "b2124dfd12529c1b72899b9866b34d03", "text": "In today's world, the amount of stored information has been enormously increasing day by day which is generally in the unstructured form and cannot be used for any processing to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization are available for the same which comes under the category of text mining. Text Mining can be defined as a technique which is used to extract interesting information or knowledge from the text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values.", "title": "" }, { "docid": "9556a7f345a31989bff1ee85fc31664a", "text": "The neural basis of variation in human intelligence is not well delineated. Numerous studies relating measures of brain size such as brain weight, head circumference, CT or MRI brain volume to different intelligence test measures, with variously defined samples of subjects have yielded inconsistent findings with correlations from approximately 0 to 0.6, with most correlations approximately 0.3 or 0.4. The study of intelligence in relation to postmortem cerebral volume is not available to date. We report the results of such a study on 100 cases (58 women and 42 men) having prospectively obtained Full Scale Wechsler Adult Intelligence Scale scores. 
Ability correlated with cerebral volume, but the relationship depended on the realm of intelligence studied, as well as the sex and hemispheric functional lateralization of the subject. General verbal ability was positively correlated with cerebral volume and each hemisphere's volume in women and in right-handed men accounting for 36% of the variation in verbal intelligence. There was no evidence of such a relationship in non-right-handed men, indicating that at least for verbal intelligence, functional asymmetry may be a relevant factor in structure-function relationships in men, but not in women. In women, general visuospatial ability was also positively correlated with cerebral volume, but less strongly, accounting for approximately 10% of the variance. In men, there was a non-significant trend of a negative correlation between visuospatial ability and cerebral volume, suggesting that the neural substrate of visuospatial ability may differ between the sexes. Analyses of additional research subjects used as test cases provided support for our regression models. In men, visuospatial ability and cerebral volume were strongly linked via the factor of chronological age, suggesting that the well-documented decline in visuospatial intelligence with age is related, at least in right-handed men, to the decrease in cerebral volume with age. We found that cerebral volume decreased only minimally with age in women. This leaves unknown the neural substrate underlying the visuospatial decline with age in women. Body height was found to account for 1-4% of the variation in cerebral volume within each sex, leaving the basis of the well-documented sex difference in cerebral volume unaccounted for. With finer testing instruments of specific cognitive abilities and measures of their associated brain regions, it is likely that stronger structure-function relationships will be observed. Our results point to the need for responsibility in the consideration of the possible use of brain images as intelligence tests.", "title": "" }, { "docid": "d51f2c1b31d1cfb8456190745ff294f7", "text": "This paper presents the design and measured performance of a novel intermediate-frequency variable-gain amplifier for Wideband Code-Division Multiple Access (WCDMA) transmitters. A compensation technique for parasitic coupling is proposed which allows a high dynamic range of 77 dB to be attained at 400 MHz while using a single variable-gain stage. Temperature compensation and decibel-linear characteristic are achieved by means of a control circuit which provides a lower than /spl plusmn/1.5 dB gain error over full temperature and gain ranges. The device is fabricated in a 0.8-/spl mu/m 46 GHz f/sub T/ silicon bipolar technology and drains up to 6 mA from a 2.7-V power supply.", "title": "" }, { "docid": "a0a618a4c5e81dce26d095daea7668e2", "text": "We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. 
Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.", "title": "" }, { "docid": "1e12a7de843a49f429ac490939f8267c", "text": "BACKGROUND\nThe preparation consisting of a head-fixed mouse on a spherical or cylindrical treadmill offers unique advantages in a variety of experimental contexts. Head fixation provides the mechanical stability necessary for optical and electrophysiological recordings and stimulation. Additionally, it can be combined with virtual environments such as T-mazes, enabling these types of recording during diverse behaviors.\n\n\nNEW METHOD\nIn this paper we present a low-cost, easy-to-build acquisition system, along with scalable computational methods to quantitatively measure behavior (locomotion and paws, whiskers, and tail motion patterns) in head-fixed mice locomoting on cylindrical or spherical treadmills.\n\n\nEXISTING METHODS\nSeveral custom supervised and unsupervised methods have been developed for measuring behavior in mice. However, to date there is no low-cost, turn-key, general-purpose, and scalable system for acquiring and quantifying behavior in mice.\n\n\nRESULTS\nWe benchmark our algorithms against ground truth data generated either by manual labeling or by simpler methods of feature extraction. We demonstrate that our algorithms achieve good performance, both in supervised and unsupervised settings.\n\n\nCONCLUSIONS\nWe present a low-cost suite of tools for behavioral quantification, which serve as valuable complements to recording and stimulation technologies being developed for the head-fixed mouse preparation.", "title": "" }, { "docid": "9395961b446f753060a7f7b88d27f933", "text": "The goal of this research paper is to summarise the literature on implementation of the Blockchain and similar digital ledger techniques in various other domains beyond its application to crypto-currency and to draw appropriate conclusions. Blockchain being a relatively new technology, a representative sample of research is presented, spanning over the last ten years, starting from the early work in this field. Different types of usage of Blockchain and other digital ledger techniques, their challenges, applications, security and privacy issues were investigated. Identifying the most propitious direction for future use of Blockchain beyond crypto-currency is the main focus of the review study. Blockchain (BC), the technology behind Bitcoin crypto-currency system, is considered to be essential for forming the backbone for ensuring enhanced security and privacy for various applications in many other domains including the Internet of Things (IoT) eco-system. International research is currently being conducted in both academia and industry applying Blockchain in varied domains. The Proof-of-Work (PoW) mathematical challenge ensures BC security by maintaining a digital ledger of transactions that is considered to be unalterable. Furthermore, BC uses a changeable", "title": "" }, { "docid": "a3fe3b92fe53109888b26bb03c200180", "text": "Using Artificial Neural Networks (ANNs) in critical applications can be challenging due to the often experimental nature of ANN construction and the \"black box\" label that is frequently attached to ANNs.
The software development process model presented herein is targeted specifically toward artificial neural networks in critical applications. The model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use ANNs and need to maintain or achieve a Capability Maturity Model (CMM) or ISO software development rating. Further, while this model is aimed directly at neural network development, with minor modifications, the model could be applied to any technique wherein knowledge is extracted from existing data, such as other numeric approaches or knowledge-based systems.", "title": "" }, { "docid": "2d7458da22077bec73d24fc29fdc0f62", "text": "This paper studies the monocular visual odometry (VO) problem. Most existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.", "title": "" }, { "docid": "a83bde310a2311fc8e045486a7961657", "text": "Radio frequency identification (RFID) of objects or people has become very popular in many services in industry, distribution logistics, manufacturing companies and goods flow systems. When RFID frequency rises into the microwave region, the tag antenna must be carefully designed to match the free space and to the following ASIC. In this paper, we present a novel folded dipole antenna with a very simple configuration. The required input impedance can be achieved easily by choosing suitable geometry parameters.", "title": "" }, { "docid": "a65d67cdd3206a99f91774ae983064b4", "text": "BACKGROUND\nIn recent years there has been a progressive rise in the number of asylum seekers and refugees displaced from their country of origin, with significant social, economic, humanitarian and public health implications. In this population, up-to-date information on the rate and characteristics of mental health conditions, and on interventions that can be implemented once mental disorders have been identified, is needed. 
This umbrella review aims at systematically reviewing existing evidence on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in adult and children asylum seekers and refugees resettled in low, middle and high income countries.\n\n\nMETHODS\nWe conducted an umbrella review of systematic reviews summarizing data on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in asylum seekers and/or refugees. Methodological quality of the included studies was assessed with the AMSTAR checklist.\n\n\nRESULTS\nThirteen reviews reported data on the prevalence of common mental disorders while fourteen reviews reported data on the efficacy of psychological or pharmacological interventions. Although there was substantial variability in prevalence rates, we found that depression and anxiety were at least as frequent as post-traumatic stress disorder, accounting for up to 40% of asylum seekers and refugees. In terms of psychosocial interventions, cognitive behavioral interventions, in particular narrative exposure therapy, were the most studied interventions with positive outcomes against inactive but not active comparators.\n\n\nCONCLUSIONS\nCurrent epidemiological data needs to be expanded with more rigorous studies focusing not only on post-traumatic stress disorder but also on depression, anxiety and other mental health conditions. In addition, new studies are urgently needed to assess the efficacy of psychosocial interventions when compared not only with no treatment but also each other. Despite current limitations, existing epidemiological and experimental data should be used to develop specific evidence-based guidelines, possibly by international independent organizations, such as the World Health Organization or the United Nations High Commission for Refugees. Guidelines should be applicable to different organizations of mental health care, including low and middle income countries as well as high income countries.", "title": "" }, { "docid": "6968d5646db3941b06d3763033cb8d45", "text": "Path prediction is useful in a wide range of applications. Most of the existing solutions, however, are based on eager learning methods where models and patterns are extracted from historical trajectories and then used for future prediction. Since such approaches are committed to a set of statistically significant models or patterns, problems can arise in dynamic environments where the underlying models change quickly or where the regions are not covered with statistically significant models or patterns.\n We propose a \"semi-lazy\" approach to path prediction that builds prediction models on the fly using dynamically selected reference trajectories. Such an approach has several advantages. First, the target trajectories to be predicted are known before the models are built, which allows us to construct models that are deemed relevant to the target trajectories. Second, unlike the lazy learning approaches, we use sophisticated learning algorithms to derive accurate prediction models with acceptable delay based on a small number of selected reference trajectories. Finally, our approach can be continuously self-correcting since we can dynamically re-construct new models if the predicted movements do not match the actual ones.\n Our prediction model can construct a probabilistic path whose probability of occurrence is larger than a threshold and which is furthest ahead in term of time. 
Users can control the confidence of the path prediction by setting a probability threshold. We conducted a comprehensive experimental study on real-world and synthetic datasets to show the effectiveness and efficiency of our approach.", "title": "" }, { "docid": "f812cbdea7f9a6827b799bfa2d7baf60", "text": "Most real-world dynamic systems are composed of different components that often evolve at very different rates. In traditional temporal graphical models, such as dynamic Bayesian networks, time is modeled at a fixed granularity, generally selected based on the rate at which the fastest component evolves. Inference must then be performed at this fastest granularity, potentially at significant computational cost. Continuous Time Bayesian Networks (CTBNs) avoid time-slicing in the representation by modeling the system as evolving continuously over time. The expectation-propagation (EP) inference algorithm of Nodelman et al. (2005) can then vary the inference granularity over time, but the granularity is uniform across all parts of the system, and must be selected in advance. In this paper, we provide a new EP algorithm that utilizes a general cluster graph architecture where clusters contain distributions that can overlap in both space (set of variables) and time. This architecture allows different parts of the system to be modeled at very different time granularities, according to their current rate of evolution. We also provide an information-theoretic criterion for dynamically re-partitioning the clusters during inference to tune the level of approximation to the current rate of evolution. This avoids the need to hand-select the appropriate granularity, and allows the granularity to adapt as information is transmitted across the network. We present experiments demonstrating that this approach can result in significant computational savings.", "title": "" }, { "docid": "0dc9f8f65efd02f16fea77d910fd73c7", "text": "The visual system is the most studied sensory pathway, which is partly because visual stimuli have rather intuitive properties. There are reasons to think that the underlying principle ruling coding, however, is the same for vision and any other type of sensory signal, namely the code has to satisfy some notion of optimality--understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and used to the best performance of the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries that visual scenes have. We discuss in depth the only approach which implements the three of them known so far: translational invariance, scale invariance and multiscaling. 
Not surprisingly, the resulting code possesses features observed in real visual systems in mammals.", "title": "" }, { "docid": "d91cb15eb4581c44c2f9f9a4ba67dfd1", "text": "BACKGROUND\nbeta-Blockade-induced benefit in heart failure (HF) could be related to baseline heart rate and treatment-induced heart rate reduction, but no such relationships have been demonstrated.\n\n\nMETHODS AND RESULTS\nIn CIBIS II, we studied the relationships between baseline heart rate (BHR), heart rate changes at 2 months (HRC), nature of cardiac rhythm (sinus rhythm or atrial fibrillation), and outcomes (mortality and hospitalization for HF). Multivariate analysis of CIBIS II showed that in addition to beta-blocker treatment, BHR and HRC were both significantly related to survival and hospitalization for worsening HF, the lowest BHR and the greatest HRC being associated with best survival and reduction of hospital admissions. No interaction between the 3 variables was observed, meaning that on one hand, HRC-related improvement in survival was similar at all levels of BHR, and on the other hand, bisoprolol-induced benefit over placebo for survival was observed to a similar extent at any level of both BHR and HRC. Bisoprolol reduced mortality in patients with sinus rhythm (relative risk 0.58, P:<0.001) but not in patients with atrial fibrillation (relative risk 1.16, P:=NS). A similar result was observed for cardiovascular mortality and hospitalization for HF worsening.\n\n\nCONCLUSIONS\nBHR and HRC are significantly related to prognosis in heart failure. beta-Blockade with bisoprolol further improves survival at any level of BHR and HRC and to a similar extent. The benefit of bisoprolol is questionable, however, in patients with atrial fibrillation.", "title": "" }, { "docid": "24ac33300d3ea99441068c20761e8305", "text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.", "title": "" } ]
scidocsrr
ee5cc702b6cd46fa7f2a31d83df996b2
Academic advising system using data mining method for decision making support
[ { "docid": "f7a36f939cbe9b1d403625c171491837", "text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block), that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e. enrolment data help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled to 71150 Information Systems course was used to perform a quantitative analysis of study outcome. Based on a data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggests that all trees, based only on enrolment data are not quite good in separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.", "title": "" } ]
[ { "docid": "f06e1cd245863415531e65318c97f96b", "text": "In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to the state-of-art SR algorithms.", "title": "" }, { "docid": "39b02ea486f16b0e09c79b7f4d792531", "text": "In this paper, we present the Functional Catalogue (FunCat), a hierarchically structured, organism-independent, flexible and scalable controlled classification system enabling the functional description of proteins from any organism. FunCat has been applied for the manual annotation of prokaryotes, fungi, plants and animals. We describe how FunCat is implemented as a highly efficient and robust tool for the manual and automatic annotation of genomic sequences. Owing to its hierarchical architecture, FunCat has also proved to be useful for many subsequent downstream bioinformatic applications. This is illustrated by the analysis of large-scale experiments from various investigations in transcriptomics and proteomics, where FunCat was used to project experimental data into functional units, as 'gold standard' for functional classification methods, and also served to compare the significance of different experimental methods. Over the last decade, the FunCat has been established as a robust and stable annotation scheme that offers both, meaningful and manageable functional classification as well as ease of perception.", "title": "" }, { "docid": "2cea3c0621b1ac332a6eb305661c077b", "text": "Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.", "title": "" }, { "docid": "676cdee75f9bb167d61017c22cf48496", "text": "Since the introduction of passive commercial capsule endoscopes, researchers have been pursuing methods to control and localize these devices, many utilizing magnetic fields [1, 2]. An advantage of magnetics is the ability to both actuate and localize using the same technology. 
Prior work from our group [3] developed a method to actuate screw-type magnetic capsule endoscopes in the intestines using a single rotating magnetic dipole located at any position with respect to the capsule. This paper presents a companion localization method that uses the same rotating dipole field for full 6-D pose estimation of a capsule endoscope embedded with a small permanet magnet and an array of magnetic-field sensors. Although several magnetic localization algorithms have been previously published, many are not compatible with magnetic actuation [4, 5]. Those that are require the addition of an accelerometer [6, 7], need a priori knowledge of the capsule’s orientation [7], provide only 3-D information [6], or must manipulate the position of the external magnetic source during localization [8, 9]. Kim et al. presented an iterative method for use with rotating magnetic fields, but the method contains errors [10]. Our proposed algorithm is less sensitive to data synchronization issues and sensor noise than our previous non-iterative method [11] because the data from the magnetic sensors is incorporated independently (rather than first using sensor data to estimate the field at the center of the capsule’s magnet), and the full pose is solved simultaneously (instead of position and orientation sequentially).", "title": "" }, { "docid": "024cebc81fb851a74957e9b15130f9f6", "text": "RATIONALE\nCardiac lipotoxicity, characterized by increased uptake, oxidation, and accumulation of lipid intermediates, contributes to cardiac dysfunction in obesity and diabetes mellitus. However, mechanisms linking lipid overload and mitochondrial dysfunction are incompletely understood.\n\n\nOBJECTIVE\nTo elucidate the mechanisms for mitochondrial adaptations to lipid overload in postnatal hearts in vivo.\n\n\nMETHODS AND RESULTS\nUsing a transgenic mouse model of cardiac lipotoxicity overexpressing ACSL1 (long-chain acyl-CoA synthetase 1) in cardiomyocytes, we show that modestly increased myocardial fatty acid uptake leads to mitochondrial structural remodeling with significant reduction in minimum diameter. This is associated with increased palmitoyl-carnitine oxidation and increased reactive oxygen species (ROS) generation in isolated mitochondria. Mitochondrial morphological changes and elevated ROS generation are also observed in palmitate-treated neonatal rat ventricular cardiomyocytes. Palmitate exposure to neonatal rat ventricular cardiomyocytes initially activates mitochondrial respiration, coupled with increased mitochondrial polarization and ATP synthesis. However, long-term exposure to palmitate (>8 hours) enhances ROS generation, which is accompanied by loss of the mitochondrial reticulum and a pattern suggesting increased mitochondrial fission. Mechanistically, lipid-induced changes in mitochondrial redox status increased mitochondrial fission by increased ubiquitination of AKAP121 (A-kinase anchor protein 121) leading to reduced phosphorylation of DRP1 (dynamin-related protein 1) at Ser637 and altered proteolytic processing of OPA1 (optic atrophy 1). Scavenging mitochondrial ROS restored mitochondrial morphology in vivo and in vitro.\n\n\nCONCLUSIONS\nOur results reveal a molecular mechanism by which lipid overload-induced mitochondrial ROS generation causes mitochondrial dysfunction by inducing post-translational modifications of mitochondrial proteins that regulate mitochondrial dynamics. 
These findings provide a novel mechanism for mitochondrial dysfunction in lipotoxic cardiomyopathy.", "title": "" }, { "docid": "c3531a47987db261fb9a6bb0bea3c4a3", "text": "We address the problem of making online, parallel query plans fault-tolerant: i.e., provide intra-query fault-tolerance without blocking. We develop an approach that not only achieves this goal but does so through the use of different fault-tolerance techniques at different operators within a query plan. Enabling each operator to use a different fault-tolerance strategy leads to a space of fault-tolerance plans amenable to cost-based optimization. We develop FTOpt, a cost-based fault-tolerance optimizer that automatically selects the best strategy for each operator in a query plan in a manner that minimizes the expected processing time with failures for the entire query. We implement our approach in a prototype parallel query-processing engine. Our experiments demonstrate that (1) there is no single best fault-tolerance strategy for all query plans, (2) often hybrid strategies that mix-and-match recovery techniques outperform any uniform strategy, and (3) our optimizer correctly identifies winning fault-tolerance configurations.", "title": "" }, { "docid": "1eba4ab4cb228a476987a5d1b32dda6c", "text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.", "title": "" }, { "docid": "f7aa61140a7f118ce2df44cf8dcc7cb3", "text": "Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. 
The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads.\n However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions which become prohibitively expensive as the number of test conditions increases.\n In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generated test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explore different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.) many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.", "title": "" }, { "docid": "df8248b303c793b1f2c6231951e12aa4", "text": "Marfan syndrome is a connective-tissue disease inherited in an autosomal dominant manner and caused mainly by mutations in the gene FBN1. This gene encodes fibrillin-1, a glycoprotein that is the main constituent of the microfibrils of the extracellular matrix. Most mutations are unique and affect a single amino acid of the protein. Reduced or abnormal fibrillin-1 leads to tissue weakness, increased transforming growth factor β signaling, loss of cell–matrix interactions, and, finally, to the different phenotypic manifestations of Marfan syndrome. Since the description of FBN1 as the gene affected in patients with this disorder, great advances have been made in the understanding of its pathogenesis. The development of several mouse models has also been crucial to our increased understanding of this disease, which is likely to change the treatment and the prognosis of patients in the coming years. Among the many different clinical manifestations of Marfan syndrome, cardiovascular involvement deserves special consideration, owing to its impact on prognosis. However, the diagnosis of patients with Marfan syndrome should be made according to Ghent criteria and requires a comprehensive clinical assessment of multiple organ systems. Genetic testing can be useful in the diagnosis of selected cases.", "title": "" }, { "docid": "3ddf6fab70092eade9845b04dd8344a0", "text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscripts relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. 
Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of the digital realizations and its applications.", "title": "" }, { "docid": "5f2aef6c79b4e03bfc4adcd5aa1d9e7c", "text": "Multiple sclerosis (MS) is a chronic inflammatory demyelinating disease of the central nervous system, which is heterogeneous with respect to clinical manifestations and response to therapy. Identification of biomarkers appears desirable for an improved diagnosis of MS as well as for monitoring of disease activity and treatment response. MicroRNAs (miRNAs) are short non-coding RNAs, which have been shown to have the potential to serve as biomarkers for different human diseases, most notably cancer. Here, we analyzed the expression profiles of 866 human miRNAs. In detail, we investigated the miRNA expression in blood cells of 20 patients with relapsing-remitting MS (RRMS) and 19 healthy controls using a human miRNA microarray and the Geniom Real Time Analyzer (GRTA) platform. We identified 165 miRNAs that were significantly up- or downregulated in patients with RRMS as compared to healthy controls. The best single miRNA marker, hsa-miR-145, allowed discriminating MS from controls with a specificity of 89.5%, a sensitivity of 90.0%, and an accuracy of 89.7%. A set of 48 miRNAs that was evaluated by radial basis function kernel support vector machines and 10-fold cross validation yielded a specificity of 95%, a sensitivity of 97.6%, and an accuracy of 96.3%. While 43 of the 165 miRNAs deregulated in patients with MS have previously been related to other human diseases, the remaining 122 miRNAs are so far exclusively associated with MS. The implications of our study are twofold. The miRNA expression profiles in blood cells may serve as a biomarker for MS, and deregulation of miRNA expression may play a role in the pathogenesis of MS.", "title": "" }, { "docid": "41e10927206bebd484b1f137c89e31fe", "text": "Cable-driven parallel robots (CDPR) are efficient manipulators able to carry heavy payloads across large workspaces. Therefore, the dynamic parameters such as the mobile platform mass and center of mass location may considerably vary. Without any adaptation, the erroneous parametric estimate results in mismatch terms added to the closed-loop system, which may decrease the robot performances. In this paper, we introduce an adaptive dual-space motion control scheme for CDPR. The proposed method aims at increasing the robot tracking performances, while keeping all the cables tensed despite uncertainties and changes in the robot dynamic parameters. Real-time experimental tests, performed on a large redundantly actuated CDPR prototype, validate the efficiency of the proposed control scheme. These results are compared to those obtained with a non-adaptive dual-space feedforward control scheme.", "title": "" }, { "docid": "371be25b5ae618c599e551784641bbcb", "text": "The paper presents a proposal for a practical implementation of a simple IoT gateway based on an Arduino microcontroller, dedicated to use in a home IoT environment. The authors concentrate on the performance and security aspects of the created system. 
Load tests and a denial-of-service attack were performed to investigate the performance and capacity limits of the implemented gateway.", "title": "" }, { "docid": "71cd341da48223745e0abc5aa9aded7b", "text": "MIMO is a technology that utilizes multiple antennas at the transmitter/receiver to improve the throughput, capacity and coverage of a wireless system. Massive MIMO, where the base station is equipped with orders of magnitude more antennas, has shown over 10 times the spectral efficiency of MIMO while using simpler signal processing algorithms. Massive MIMO has the benefits of enhanced capacity, spectral and energy efficiency, and it can be built by using low-cost and low-power components. Despite its potential benefits, this paper also summarizes some challenges faced by massive MIMO, such as antenna spatial correlation and mutual coupling, as well as non-linear hardware impairments. These challenges encountered in massive MIMO uncover new problems that need further investigation.", "title": "" }, { "docid": "9b942a1342eb3c4fd2b528601fa42522", "text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.", "title": "" }, { "docid": "2757d2ab9c3fbc2eb01385771f297a71", "text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory. 
Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.", "title": "" }, { "docid": "60fa6928d67628eb0cc695a677a3f1c9", "text": "The assumption that there are innate integrative or actualizing tendencies underlying personality and social development is reexamined. Rather than viewing such processes as either nonexistent or as automatic, I argue that they are dynamic and dependent upon social-contextual supports pertaining to basic human psychological needs. To develop this viewpoint, I conceptually link the notion of integrative tendencies to specific developmental processes, namely intrinsic motivation; internalization; and emotional integration. These processes are then shown to be facilitated by conditions that fulfill psychological needs for autonomy, competence, and relatedness, and forestalled within contexts that frustrate these needs. Interactions between psychological needs and contextual supports account, in part, for the domain and situational specificity of motivation, experience, and relative integration. The meaning of psychological needs (vs. wants) is directly considered, as are the relations between concepts of integration and autonomy and those of independence, individualism, efficacy, and cognitive models of \"multiple selves.\"", "title": "" }, { "docid": "f10294ed332670587cf9c100f2d75428", "text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.", "title": "" }, { "docid": "ccff8afda7215d17de4fb6b9f01d6098", "text": "DB2 for Linux, UNIX, and Windows Version 9.1 introduces the Self-Tuning Memory Manager (STMM), which provides adaptive self tuning of both database memory heaps and cumulative database memory allocation. This technology provides state-of-the-art memory tuning combining control theory, runtime simulation modeling, cost-benefit analysis, and operating system resource analysis. In particular, the nove use of cost-benefit analysis and control theory techniques makes STMM a breakthrough technology in database memory management. The cost-benefit analysis allows STMM to tune memory between radically different memory consumers such as compiled statement cache, sort, and buffer pools. These methods allow for the fast convergence of memory settings while also providing stability in the presence of system noise. The tuning mode has been found in numerous experiments to tune memory allocation as well as expert human administrators, including OLTP, DSS, and mixed environments. We believe this is the first known use of cost-benefit analysis and control theory in database memory tuning across heterogeneous memory consumers.", "title": "" }, { "docid": "48b88774957a6d30ae9d0a97b9643647", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. 
For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" } ]
scidocsrr
103788d6f36997cc1e6cd103155e537d
A survey of data mining techniques for analyzing crime patterns
[ { "docid": "f074965ee3a1d6122f1e68f49fd11d84", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "bbdb4a930ef77f91e8d76dd3a7e0f506", "text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.", "title": "" } ]
[ { "docid": "3023637fd498bb183dae72135812c304", "text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. 
Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? [Footnote: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. 
However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X, (e.g., a word) has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common", "title": "" }, { "docid": "fe8c27e7ef05816cc4c4e2c68eeaf2f9", "text": "Chassis cavities have recently been proposed as a new mounting position for vehicular antennas. Cavities can be concealed and potentially offer more space for antennas than shark-fin modules mounted on top of the roof. An antenna cavity for the front or rear edge of the vehicle roof is designed, manufactured and measured for 5.9 GHz. 
The cavity offers increased radiation in the horizontal plane and to angles below horizon, compared to cavities located in the roof center.", "title": "" }, { "docid": "16c6e41746c451d66b43c5736f622cda", "text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.", "title": "" }, { "docid": "79798f4fbe3cffdf7c90cc5349bf0531", "text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.", "title": "" }, { "docid": "c7a9efee2b447cbadc149717ad7032ee", "text": "We introduce a novel method to learn a policy from unsupervised demonstrations of a process. Given a model of the system and a set of sequences of outputs, we find a policy that has a comparable performance to the original policy, without requiring access to the inputs of these demonstrations. We do so by first estimating the inputs of the system from observed unsupervised demonstrations. Then, we learn a policy by applying vanilla supervised learning algorithms to the (estimated)input-output pairs. For the input estimation, we present a new adaptive linear estimator (AdaL-IE) that explicitly trades-off variance and bias in the estimation. 
As we show empirically, AdaL-IE produces estimates with lower error compared to the state-of-the-art input estimation method, (UMV-IE) [Gillijns and De Moor, 2007]. Using AdaL-IE in conjunction with imitation learning enables us to successfully learn control policies that consistently outperform those using UMV-IE.", "title": "" }, { "docid": "7f0023af2f3df688aa58ae3317286727", "text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.", "title": "" }, { "docid": "34901b8e3e7667e3a430b70a02595f69", "text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.", "title": "" }, { "docid": "dba1a222903031a6b3d064e6db29a108", "text": "Social engineering is a method of attack involving the exploitation of human weakness, gullibility and ignorance. Although related techniques have existed for some time, current awareness of social engineering and its many guises is relatively low and efforts are therefore required to improve the protection of the user community. This paper begins by examining the problems posed by social engineering, and outlining some of the previous efforts that have been made to address the threat. This leads toward the discussion of a new awareness-raising website that has been specifically designed to aid users in understanding and avoiding the risks. Findings from an experimental trial involving 46 participants are used to illustrate that the system served to increase users’ understanding of threat concepts, as well as providing an engaging environment in which they would be likely to persevere with their learning.", "title": "" }, { "docid": "fa0eebbf9c97942a5992ed80fd66cf10", "text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. 
The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact. Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.", "title": "" }, { "docid": "ff14cc28a72827c14aba42f3a036a088", "text": "Employees’ failure to comply with IS security procedures is a key concern for organizations today. A number of socio-cognitive theories have been used to explain this. However, prior studies have not examined the influence of past and automatic behavior on employee decisions to comply. This is an important omission because past behavior has been assumed to strongly affect decision-making. To address this gap, we integrated habit (a routinized form of past behavior) with Protection Motivation Theory (PMT), to explain compliance. An empirical test showed that habitual IS security compliance strongly reinforced the cognitive processes theorized by PMT, as well as employee intention for future compliance. We also found that nearly all components of PMT significantly impacted employee intention to comply with IS security policies. Together, these results highlighted the importance of addressing employees’ past and automatic behavior in order to improve compliance. 2012 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +1 801 361 2531; fax: +1 509 275 0886. E-mail addresses: anthony@vance.name (A. Vance), mikko.siponen@oulu.fi (M. Siponen), seppo.pahnila@oulu.fi (S. Pahnila). URL: http://www.anthonyvance.com 1 http://www.issrc.oulu.fi/.", "title": "" }, { "docid": "03d41408da6babfc97399c64860f50cd", "text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. 
Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.", "title": "" }, { "docid": "8c0cbfc060b3a6aa03fd8305baf06880", "text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.", "title": "" }, { "docid": "198944af240d732b6fadcee273c1ba18", "text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.", "title": "" }, { "docid": "24f110f2b34e9da32fbd78ad242808bc", "text": "BACKGROUND\nSurvey research including multiple health indicators requires brief indices for use in cross-cultural studies, which have, however, rarely been tested in terms of their psychometric quality. Recently, the EUROHIS-QOL 8-item index was developed as an adaptation of the WHOQOL-100 and the WHOQOL-BREF. The aim of the current study was to test the psychometric properties of the EUROHIS-QOL 8-item index.\n\n\nMETHODS\nIn a survey on 4849 European adults, the EUROHIS-QOL 8-item index was assessed across 10 countries, with equal samples adjusted for selected sociodemographic data. 
Participants were also investigated with a chronic condition checklist, measures on general health perception, mental health, health-care utilization and social support.\n\n\nRESULTS\nFindings indicated good internal consistencies across a range of countries, showing acceptable convergent validity with physical and mental health measures, and the measure discriminates well between individuals that report having a longstanding condition and healthy individuals across all countries. Differential item functioning was less frequently observed in those countries that were geographically and culturally closer to the UK, but acceptable across all countries. A universal one-factor structure with a good fit in structural equation modelling analyses (SEM) was identified with, however, limitations in model fit for specific countries.\n\n\nCONCLUSIONS\nThe short EUROHIS-QOL 8-item index showed good cross-cultural field study performance and a satisfactory convergent and discriminant validity, and can therefore be recommended for use in public health research. In future studies the measure should also be tested in multinational clinical studies, particularly in order to test its sensitivity.", "title": "" }, { "docid": "1a7cfc19e7e3f9baf15e4a7450338c33", "text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.", "title": "" }, { "docid": "8b0870c8e975eeff8597eb342cd4f3f9", "text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. 
We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.", "title": "" }, { "docid": "a31287791b12f55adebacbb93a03c8bc", "text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.", "title": "" }, { "docid": "5a4d88bb879cf441808307961854c58c", "text": "Activity prediction is an essential task in practical human-centered robotics applications, such as security, assisted living, etc., which targets at inferring ongoing human activities based on incomplete observations. To address this challenging problem, we introduce a novel bio-inspired predictive orientation decomposition (BIPOD) approach to construct representations of people from 3D skeleton trajectories. Our approach is inspired by biological research in human anatomy. In order to capture spatio-temporal information of human motions, we spatially decompose 3D human skeleton trajectories and project them onto three anatomical planes (i.e., coronal, transverse and sagittal planes); then, we describe short-term time information of joint motions and encode high-order temporal dependencies. By estimating future skeleton trajectories that are not currently observed, we endow our BIPOD representation with the critical predictive capability. Empirical studies validate that our BIPOD approach obtains promising performance, in terms of accuracy and efficiency, using a physical TurtleBot2 robotic platform to recognize ongoing human activities. Experiments on benchmark datasets further demonstrate that our new BIPOD representation significantly outperforms previous approaches for real-time activity classification and prediction from 3D human skeleton trajectories.", "title": "" }, { "docid": "5ebddfaac62ec66171b65a776c1682b7", "text": "We investigated the reliability of a test assessing quadriceps strength, endurance and fatigability in a single session. We used femoral nerve magnetic stimulation (FMNS) to distinguish central and peripheral factors of neuromuscular fatigue. We used a progressive incremental loading with multiple assessments to limit the influence of subject's cooperation and motivation. Twenty healthy subjects (10 men and 10 women) performed the test on two different days. 
Maximal voluntary strength and evoked quadriceps responses via FMNS were measured before, after each set of 10 submaximal isometric contractions (5-s on/5-s off; starting at 10% of maximal voluntary strength with 10% increments), immediately and 30min after task failure. The test induced progressive peripheral (41±13% reduction in single twitch at task failure) and central fatigue (3±7% reduction in voluntary activation at task failure). Good inter-day reliability was found for the total number of submaximal contractions achieved (i.e. endurance index: ICC=0.83), for reductions in maximal voluntary strength (ICC>0.81) and evoked muscular responses (i.e. fatigue index: ICC>0.85). Significant sex-differences were also detected. This test shows good reliability for strength, endurance and fatigability assessments. Further studies should be conducted to evaluate its feasibility and reliability in patients.", "title": "" } ]
scidocsrr
c3a67924b943b0a1671f266cf8d42406
Hybrid CPU-GPU Framework for Network Motifs
[ { "docid": "777d4e55f3f0bbb0544130931006b237", "text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.", "title": "" } ]
[ { "docid": "b9b6fc972d887f64401ec77e3ca1e49b", "text": "We select a menu of seven popular decision theories and embed each theory in five models of stochastic choice, including tremble, Fechner and random utility model. We find that the estimated parameters of decision theories differ significantly when theories are combined with different models. Depending on the selected model of stochastic choice we obtain different rankings of decision theories with regard to their goodness of fit to the data. The fit of all analyzed decision theories improves significantly when they are embedded in a Fechner model of heteroscedastic truncated errors or a random utility model. Copyright  2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "cf751df3c52306a106fcd00eef28b1a4", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "141c28bfbeb5e71dc68d20b6220794c3", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "083d5b88cc1bf5490a0783a4a94e9fb2", "text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. 
The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.", "title": "" }, { "docid": "f3a7e0f63d85c069e3f2ab75dcedc671", "text": "The commit processing in a Distributed Real Time Database (DRTDBS) can significantly increase execution time of a transaction. Therefore, designing a good commit protocol is important for the DRTDBS; the main challenge is the adaptation of standard commit protocol into the real time database system and so, decreasing the number of missed transaction in the systems. In these papers we review the basic commit protocols and the other protocols depend on it, for enhancing the transaction performance in DRTDBS. We propose a new commit protocol for reducing the number of transaction that missing their deadline. Keywords— DRTDBS, Commit protocols, Commit processing, 2PC protocol, 3PC protocol, Missed Transaction, Abort Transaction.", "title": "" }, { "docid": "711ad6f6641b916f25f08a32d4a78016", "text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "74a9612c1ca90a9d7b6152d19af53d29", "text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. 
The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.", "title": "" }, { "docid": "5398b76e55bce3c8e2c1cd89403b8bad", "text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. 
Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that", "title": "" }, { "docid": "cb3d1448269b29807dc62aa96ff6ad1a", "text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. 
We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "f38709ee76dd9988b36812a7801f7336", "text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.", "title": "" }, { "docid": "af12993c21eb626a7ab8715da1f608c9", "text": "Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several low earth orbit satellite systems that promise worldwide connectivity and real-time voice communications. This article provides a tutorial overview of the IRIDIUM low earth orbit satellite system and performance results obtained via simulation. First, it presents an overview of key IRIDIUM design parameters and features. Then, it examines the issues associated with routing in a dynamic network topology, focusing on network management and routing algorithm selection. Finally, it presents the results of the simulation and demonstrates that the IRIDIUM system is a robust system capable of meeting published specifications.", "title": "" }, { "docid": "f614df1c1775cd4e2a6927fce95ffa46", "text": "In this paper we have designed and implemented a (15, k) BCH encoder and decoder using VHDL for reliable data transfer in an AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of the multiple error correcting BCH code (15, k) of length n=15 over GF(2^4) with the irreducible primitive polynomial x^4 + x + 1 is organized into shift register circuits. Using the cyclic codes, the remainder b(x) can be obtained in a linear (15-k)-stage shift register with feedback connections corresponding to the coefficients of the generator polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficients of the generator polynomial. 
Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR", "title": "" }, { "docid": "81291c707a102fac24a9d5ab0665238d", "text": "CAN bus is ISO international standard serial communication protocol. It is one of the most widely used fieldbus in the world. It has become the standard bus of embedded industrial control LAN. Ethernet is the most common communication protocol standard that is applied in the existing LAN. Networked industrial control usually adopts fieldbus and Ethernet network, thus the protocol conversion problems of the heterogeneous network composed of Ethernet and CAN bus has become one of the research hotspots in the technology of the industrial control network. STM32F103RC ARM microprocessor was used in the design of the Ethernet-CAN protocol conversion module, the simplified TCP/IP communication protocol uIP protocol was adopted to improve the efficiency of the protocol conversion and guarantee the stability of the system communication. The results of the experiments show that the designed module can realize high-speed and transparent protocol conversion.", "title": "" }, { "docid": "32744d62b45f742cdab55ab462670a39", "text": "The kinematics of manipulators is a central problem in the automatic control of robot manipulators. Theoretical background for the analysis of the 5 Dof Lynx-6 educational Robot Arm kinematics is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, An effective method is suggested to decrease multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing Motional Characteristics of the Lynx-6 Robot arm. The kinematics solutions of the software package were found to be identical with the robot arm’s physical motional behaviors. Keywords—Lynx 6, robot arm, forward kinematics, inverse kinematics, software, DH parameters, 5 DOF ,SSC-32 , simulator.", "title": "" }, { "docid": "189d0b173f8a9e0b3deb21398955dc3c", "text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. 
Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.", "title": "" }, { "docid": "822e6c57ea2bbb53d43e44cf1bda8833", "text": "The investigators proposed that transgression-related interpersonal motivations result from 3 psychological parameters: forbearance (abstinence from avoidance and revenge motivations, and maintenance of benevolence), trend forgiveness (reductions in avoidance and revenge, and increases in benevolence), and temporary forgiveness (transient reductions in avoidance and revenge, and transient increases in benevolence). In 2 studies, the investigators examined this 3-parameter model. Initial ratings of transgression severity and empathy were directly related to forbearance but not trend forgiveness. Initial responsibility attributions were inversely related to forbearance but directly related to trend forgiveness. When people experienced high empathy and low responsibility attributions, they also tended to experience temporary forgiveness. The distinctiveness of each of these 3 parameters underscores the importance of studying forgiveness temporally.", "title": "" }, { "docid": "0eff5b8ec08329b4a5d177baab1be512", "text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. 
The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.", "title": "" } ]
scidocsrr
09b8b665207ac2583f3c98d2a41e26fc
NewsCube: delivering multiple aspects of news to mitigate media bias
[ { "docid": "7f05bd51c98140417ff73ec2d4420d6a", "text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.", "title": "" }, { "docid": "212536baf7f5bd2635046774436e0dbf", "text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alterative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.", "title": "" } ]
[ { "docid": "5029feaec44e80561efef4b97c435896", "text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.", "title": "" }, { "docid": "fb00601b60bcd1f7a112e34d93d55d01", "text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <s.liu3@tue.nl>.", "title": "" }, { "docid": "881da6fd2d6c77d9f31ba6237c3d2526", "text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. 
This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.", "title": "" }, { "docid": "4ea8351c57e4581bfdab4c7cd357c90a", "text": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.", "title": "" }, { "docid": "59bab56cb454b05eb4f12db425f4d0ce", "text": "This study explores one of the contributors to group composition-the basis on which people choose others with whom they want to work. We use a combined model to explore individual attributes, relational attributes, and previous structural ties as determinants of work partner choice. Four years of data from participants in 33 small project groups were collected, some of which reflects individual participant characteristics and some of which is social network data measuring the previous relationship between two participants. Our results suggest that when selecting future group members people are biased toward others of the same race, others who have a reputation for being competent and hard working, and others with whom they have developed strong working relationships in the past. These results suggest that people strive for predictability when choosing future work group members. Copyright 2000 Academic Press.", "title": "" }, { "docid": "661c99429dc6684ca7d6394f01201ac3", "text": "SUMO is an open source traffic simulation package including net import and demand modeling components. We describe the current state of the package as well as future developments and extensions. SUMO helps to investigate several research topics e.g. route choice and traffic light algorithm or simulating vehicular communication. Therefore the framework is used in different projects to simulate automatic driving or traffic management strategies. Keywordsmicroscopic traffic simulation, software, open", "title": "" }, { "docid": "f177b129e4a02fe42084563a469dc47d", "text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. 
Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.", "title": "" }, { "docid": "0907539385c59f9bd476b2d1fb723a38", "text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.", "title": "" }, { "docid": "5f3dc141b69eb50e17bdab68a2195e13", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.", "title": "" }, { "docid": "fe300167bce299523d20d063417e6d31", "text": "The n-gram language model, which has its roots in statistical natural language processing, has been shown to successfully capture the repetitive and predictable regularities (“naturalness\") of source code, and help with tasks such as code suggestion, porting, and designing assistive coding devices. However, we show in this paper that this natural-language-based model fails to exploit a special property of source code: localness. We find that human-written programs are localized: they have useful local regularities that can be captured and exploited. We introduce a novel cache language model that consists of both an n-gram and an added “cache\" component to exploit localness. 
We show empirically that the additional cache component greatly improves the n-gram approach by capturing the localness of software, as measured by both cross-entropy and suggestion accuracy. Our model’s suggestion accuracy is actually comparable to a state-of-the-art, semantically augmented language model; but it is simpler and easier to implement. Our cache language model requires nothing beyond lexicalization, and thus is applicable to all programming languages.", "title": "" }, { "docid": "65fd482ac37852214fc82b4bc05c6f72", "text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.", "title": "" }, { "docid": "8cb33cec31601b096ff05426e5ffa848", "text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10MHz relaxation oscillator in a 40nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20μW, a 68% reduction to the conventional fixed bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. Measurements show a frequency drift of 1.2% as the battery voltage changes from 3V to 4.1V.", "title": "" }, { "docid": "5be55ce7d8f97689bf54028063ba63d7", "text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. 
Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.", "title": "" }, { "docid": "d3afe3be6debe665f442367b17fa4e28", "text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.", "title": "" }, { "docid": "792694fbea0e2e49a454ffd77620da47", "text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). 
We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical", "title": "" }, { "docid": "db3d1a63d5505693bd6677e9b268e8d4", "text": "This paper presents a system for calibrating the extrinsic parameters and timing offsets of an array of cameras, 3-D lidars, and global positioning system/inertial navigation system sensors, without the requirement of any markers or other calibration aids. The aim of the approach is to achieve calibration accuracies comparable with state-of-the-art methods, while requiring less initial information about the system being calibrated and thus being more suitable for use by end users. The method operates by utilizing the motion of the system being calibrated. By estimating the motion each individual sensor observes, an estimate of the extrinsic calibration of the sensors is obtained. Our approach extends standard techniques for motion-based calibration by incorporating estimates of the accuracy of each sensor's readings. This yields a probabilistic approach that calibrates all sensors simultaneously and facilitates the estimation of the uncertainty in the final calibration. In addition, we combine this motion-based approach with appearance information. This gives an approach that requires no initial calibration estimate and takes advantage of all available alignment information to provide an accurate and robust calibration for the system. The new framework is validated with datasets collected with different platforms and different sensors' configurations, and compared with state-of-the-art approaches.", "title": "" }, { "docid": "cc12bd6dcd844c49c55f4292703a241b", "text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.", "title": "" }, { "docid": "b4c8ebb06c527c81e568c82afb2d4b6d", "text": "Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, that is cubic and quadratic in the number of data points respectively, becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a welldefined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. 
Moreover, some practical suggestions are provided for using the proposed algorithms.", "title": "" }, { "docid": "485b4a75726109838b1b8ed377e68ece", "text": "Item recommendation is a personalized ranking task. To this end, many recommender systems optimize models with pairwise ranking objectives, such as the Bayesian Personalized Ranking (BPR). Using matrix Factorization (MF) - the most widely used model in recommendation - as a demonstration, we show that optimizing it with BPR leads to a recommender model that is not robust. In particular, we find that the resultant model is highly vulnerable to adversarial perturbations on its model parameters, which implies the possibly large error in generalization. To enhance the robustness of a recommender model and thus improve its generalization performance, we propose a new optimization framework, namely Adversarial Personalized Ranking (APR). In short, our APR enhances the pairwise ranking method BPR by performing adversarial training. It can be interpreted as playing a minimax game, where the minimization of the BPR objective function meanwhile defends an adversary, which adds adversarial perturbations on model parameters to maximize the BPR objective function. To illustrate how it works, we implement APR on MF by adding adversarial perturbations on the embedding vectors of users and items. Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR - by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation. Our implementation is available at: \\urlhttps://github.com/hexiangnan/adversarial_personalized_ranking.", "title": "" }, { "docid": "0b74c1fbfe8ad31d2c73c8db6ce8b411", "text": "To investigate fast human reaching movements in 3D, we asked 11 right-handed persons to catch a tennis ball while we tracked the movements of their arms. To ensure consistent trajectories of the ball, we used a catapult to throw the ball from three different positions. Tangential velocity profiles of the hand were in general bell-shaped and hand movements in 3D coincided with well known results for 2D point-to-point movements such as minimum jerk theory or the 2/3rd power law. Furthermore, two phases, consisting of fast reaching and slower fine movements at the end of hand placement could clearly be seen. The aim of this study was to find a way to generate human-like (catching) trajectories for a humanoid robot.", "title": "" } ]
scidocsrr
bac3f7c9d829ac0a042e0b35e95ff424
Type-2 fuzzy logic systems for temperature evaluation in ladle furnace
[ { "docid": "fdbca2e02ac52afd687331048ddee7d3", "text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.", "title": "" }, { "docid": "c4ccb674a07ba15417f09b81c1255ba8", "text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.", "title": "" }, { "docid": "20f43c14feaf2da1e8999403bf350855", "text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. 
Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "e3f4add37a083f61feda8805478d0729", "text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.", "title": "" }, { "docid": "d9c514f3e1089f258732eef4a949fe55", "text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. 
Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.", "title": "" }, { "docid": "2923ea4e17567b06b9d8e0e9f1650e55", "text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.", "title": "" }, { "docid": "bcd47a79eeb49a34253d3c0de236f768", "text": "This is the second of five papers in the child survival series. The first focused on continuing high rates of child mortality (over 10 million each year) from preventable causes: diarrhoea, pneumonia, measles, malaria, HIV/AIDS, the underlying cause of undernutrition, and a small group of causes leading to neonatal deaths. We review child survival interventions feasible for delivery at high coverage in low-income settings, and classify these as level 1 (sufficient evidence of effect), level 2 (limited evidence), or level 3 (inadequate evidence). Our results show that at least one level-1 intervention is available for preventing or treating each main cause of death among children younger than 5 years, apart from birth asphyxia, for which a level-2 intervention is available. There is also limited evidence for several other interventions. However, global coverage for most interventions is below 50%. If level 1 or 2 interventions were universally available, 63% of child deaths could be prevented. These findings show that the interventions needed to achieve the millennium development goal of reducing child mortality by two-thirds by 2015 are available, but that they are not being delivered to the mothers and children who need them.", "title": "" }, { "docid": "8d104169f3862bc7c54d5932024ed9f6", "text": "Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. 
In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.", "title": "" }, { "docid": "77e2aac8b42b0b9263278280d867cb40", "text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.", "title": "" }, { "docid": "8c575ae46ac2969c19a841c7d9a8cb5a", "text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regressionbased approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-toend framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.", "title": "" }, { "docid": "87cfc5cad31751fd89c68dc9557eb33f", "text": "his paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter based operational transconductance amplifier (OTA) using FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, the relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, the second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz). 
The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "0b0b313c16697e303522fef245d97ba8", "text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.", "title": "" }, { "docid": "80a86ff7e26bb29cf919b22433f8b6b4", "text": "Despite the widespread acceptance and use of pornography, much remains unknown about the heterogeneity among consumers of pornography. Using a sample of 457 college students from a midwestern university in the United States, a latent profile analysis was conducted to identify unique classifications of pornography users considering motivations of pornography use, level of pornography use, age of user, degree of pornography acceptance, and religiosity. Results indicated three classes of pornography users: Porn Abstainers (n 1⁄4 285), Auto-Erotic Porn Users (n 1⁄4 85), and Complex Porn Users (n 1⁄4 87). These three classes of pornography use are carefully defined. The odds of membership in these three unique classes of pornography users was significantly distinguished by relationship status, selfesteem, and gender. These results expand what is known about pornography users by providing a more person-centered approach that is more nuanced in understanding pornography use. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit", "title": "" }, { "docid": "5c88fae140f343ae3002685ab96fd848", "text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. 
Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis. Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.", "title": "" }, { "docid": "5c31ed81a9c8d6463ce93890e38ad7b5", "text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.", "title": "" }, { "docid": "1efeab8c3036ad5ec1b4dc63a857b392", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "efe74721de3eda130957ce26435375a3", "text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. 
It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.", "title": "" }, { "docid": "a81e4b95dfaa7887f66066343506d35f", "text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.", "title": "" }, { "docid": "d80fc668073878c476bdf3997b108978", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system", "title": "" }, { "docid": "d8fc5a8bc075343b2e70a9b441ecf6e5", "text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. 
Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.", "title": "" }, { "docid": "8c1e70cf4173f9fc48f36c3e94216f15", "text": "Deep learning methods often require large annotated data sets to estimate their high numbers of parameters, which is not practical for many robotic domains. One way to migitate this issue is to transfer features learned on large datasets to related tasks. In this work, we describe the perception system developed for the entry of team NimbRo Picking into the Amazon Picking Challenge 2016. Object detection and semantic Segmentation methods are adapted to the domain, including incorporation of depth measurements. To avoid the need for large training datasets, we make use of pretrained models whenever possible, e.g. CNNs pretrained on ImageNet, and the whole DenseCap captioning pipeline pretrained on the Visual Genome Dataset. Our system performed well at the APC 2016 and reached second and third places for the stow and pick tasks, respectively.", "title": "" }, { "docid": "1a8662362e51a8783795e4588f0462a8", "text": "Human body exposure to radiofrequency electromagnetic waves emitted from smart meters was assessed using various exposure configurations. Specific energy absorption rate distributions were determined using three anatomically realistic human models. Each model was assigned with age- and frequency-dependent dielectric properties representing a collection of age groups. Generalized exposure conditions involving standing and sleeping postures were assessed for a home area network operating at 868 and 2,450 MHz. The smart meter antenna was fed with 1 W power input which is an overestimation of what real devices typically emit (15 mW max limit). The highest observed whole body specific energy absorption rate value was 1.87 mW kg-1 , within the child model at a distance of 15 cm from a 2,450 MHz device. The higher values were attributed to differences in dimension and dielectric properties within the model. Specific absorption rate (SAR) values were also estimated based on power density levels derived from electric field strength measurements made at various distances from smart meter devices. All the calculated SAR values were found to be very small in comparison to International Commission on Non-Ionizing Radiation Protection limits for public exposure. Bioelectromagnetics. 39:200-216, 2018. © 2017 Wiley Periodicals, Inc.", "title": "" } ]
scidocsrr
1b7bda7ff030aae3804d4ffdc515a6f6
Local-Global Vectors to Improve Unigram Terminology Extraction
[ { "docid": "5daa3e5ed4e26184e4d5c7b967fac58d", "text": "Keyphrase extraction from a given document is a difficult task that requires not only local statistical information but also extensive background knowledge. In this paper, we propose a graph-based ranking approach that uses information supplied by word embedding vectors as the background knowledge. We first introduce a weighting scheme that computes informativeness and phraseness scores of words using the information supplied by both word embedding vectors and local statistics. Keyphrase extraction is performed by constructing a weighted undirected graph for a document, where nodes represent words and edges are co-occurrence relations of two words within a defined window size. The weights of edges are computed by the afore-mentioned weighting scheme, and a weighted PageRank algorithm is used to compute final scores of words. Keyphrases are formed in post-processing stage using heuristics. Our work is evaluated on various publicly available datasets with documents of varying length. We show that evaluation results are comparable to the state-of-the-art algorithms, which are often typically tuned to a specific corpus to achieve the claimed results.", "title": "" } ]
[ { "docid": "b78f1e6a5e93c1ad394b1cade293829f", "text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing", "title": "" }, { "docid": "09b86e959a0b3fa28f9d3462828bbc31", "text": "Industry 4.0 has become more popular due to recent developments in cyber-physical systems, big data, cloud computing, and industrial wireless networks. Intelligent manufacturing has produced a revolutionary change, and evolving applications, such as product lifecycle management, are becoming a reality. In this paper, we propose and implement a manufacturing big data solution for active preventive maintenance in manufacturing environments. First, we provide the system architecture that is used for active preventive maintenance. Then, we analyze the method used for collection of manufacturing big data according to the data characteristics. Subsequently, we perform data processing in the cloud, including the cloud layer architecture, the real-time active maintenance mechanism, and the offline prediction and analysis method. Finally, we analyze a prototype platform and implement experiments to compare the traditionally used method with the proposed active preventive maintenance method. The manufacturing big data method used for active preventive maintenance has the potential to accelerate implementation of Industry 4.0.", "title": "" }, { "docid": "c54a5f1037fb998b0965b21ce95e5cd2", "text": "Feature selection and ensemble classification increase system efficiency and accuracy in machine learning, data mining and biomedical informatics. This research presents an analysis of the effect of removing irrelevant and redundant features with ensemble classifiers using two datasets from UCI machine learning repository. Accuracy and computational time were evaluated by four base classifiers; NaiveBayes, Multilayer Perceptron, Support Vector Machines and Decision Tree. Eliminating irrelevant features improves accuracy and reduces computational time while removing redundant features reduces computational time and reduces accuracy of the ensemble.", "title": "" }, { "docid": "c588af91f9a0c1ae59a355ce2145c424", "text": "Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability through controlling the disagreement among base learners’ outputs. Such a learning scheme is usually implemented by using feed-forward neural networks with error back-propagation algorithms (BPNNs). 
However, it suffers from slow convergence, local minima problem and model uncertainties caused by the initial weights and the setting of learning parameters. To achieve a better solution, this paper employs the random vector functional link (RVFL) networks as base components, and incorporates with the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters, where a cost function defined for NCL and the well-known least squares method are used. To examine the merits of our proposed algorithm, a comparative study is carried out with nine benchmark datasets. Results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency. Crown Copyright 2013 Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e601c68a6118139c1183ba4abd012183", "text": "Robert M. Golub, MD, Editor The JAMA Patient Page is a public service of JAMA. The information and recommendations appearing on this page are appropriate in most instances, but they are not a substitute for medical diagnosis. For specific information concerning your personal medical condition, JAMA suggests that you consult your physician. This page may be photocopied noncommercially by physicians and other health care professionals to share with patients. To purchase bulk reprints, call 312/464-0776. C H IL D H E A TH The Journal of the American Medical Association", "title": "" }, { "docid": "ba2e16103676fa57bc3ca841106d2d32", "text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.", "title": "" }, { "docid": "e4a59205189e8cca8a1aba704460f8ec", "text": "In this paper, we compare two methods for article summarization. The first method is mainly based on term-frequency, while the second method is based on ontology. We build an ontology database for analyzing the main topics of the article. 
After identifying the main topics and determining their relative significance, we rank the paragraphs based on the relevance between main topics and each individual paragraph. Depending on the ranks, we choose desired proportion of paragraphs as summary. Experimental results indicate that both methods offer similar accuracy in their selections of the paragraphs.", "title": "" }, { "docid": "989cdc80521e1c8761f733ad3ed49d79", "text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks as the classification in such data sets becomes more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease execution times to automatically find good hyper-parameter values for DNN through Evolutionary Algorithms when classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is constituted by a distributed version for a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set and relying on a decomposition into cubes of the space of the data set attributes. Experiments are carried out on a medical data set about Obstructive Sleep Anpnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much lower time with respect to the case where this reduction is not effected, and that this does not come to the detriment of the accuracy in the classification over the test set items.", "title": "" }, { "docid": "dacf2f44c3f8fc0931dceda7e4cb9bef", "text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.", "title": "" }, { "docid": "6307379eaab0db0726d791e38e533249", "text": "The present study aimed to examine the effectiveness of advertisements in enhancing consumers’ purchasing intention on Facebook in 2013. It is an applied study in terms of its goals, and a descriptive survey one in terms of methodology. The statistical population included all undergraduate students in Cypriot universities. An 11-item researcher-made questionnaire was used to compare and analyze the effectiveness of advertisements. Data analysis was carried out using SPSS17, the parametric statistical method of t-test, and the non-parametric Friedman test. The results of the study showed that Facebook advertising significantly affected brand image and brand equity, both of which factors contributed to a significant change in purchasing intention. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "de50fa9069ac6e9aefdb310efc44ed0e", "text": "We present an advanced and robust technology to realize 3D hollow plasmonic nanostructures which are tunable in size, shape, and layout. The presented architectures offer new and unconventional properties such as the realization of 3D plasmonic hollow nanocavities with high electric field confinement and enhancement, finely structured extinction profiles, and broad band optical absorption. 
The 3D nature of the devices can overcome intrinsic difficulties related to conventional architectures in a wide range of multidisciplinary applications.", "title": "" }, { "docid": "06d05d4cbfd443d45993d6cc98ab22cb", "text": "Genetic deficiency of ectodysplasin A (EDA) causes X-linked hypohidrotic ectodermal dysplasia (XLHED), in which the development of sweat glands is irreversibly impaired, an condition that can lead to life-threatening hyperthermia. We observed normal development of mouse fetuses with Eda mutations after they had been exposed in utero to a recombinant protein that includes the receptor-binding domain of EDA. We administered this protein intraamniotically to two affected human twins at gestational weeks 26 and 31 and to a single affected human fetus at gestational week 26; the infants, born in week 33 (twins) and week 39 (singleton), were able to sweat normally, and XLHED-related illness had not developed by 14 to 22 months of age. (Funded by Edimer Pharmaceuticals and others.).", "title": "" }, { "docid": "924eb275a1205dbf7907a58fc1cee5b6", "text": "BACKGROUND\nNutrient status of B vitamins, particularly folate and vitamin B-12, may be related to cognitive ageing but epidemiological evidence remains inconclusive.\n\n\nOBJECTIVE\nThe aim of this study was to estimate the association of serum folate and vitamin B-12 concentrations with cognitive function in middle-aged and older adults from three Central and Eastern European populations.\n\n\nMETHODS\nMen and women aged 45-69 at baseline participating in the Health, Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study were recruited in Krakow (Poland), Kaunas (Lithuania) and six urban centres in the Czech Republic. Tests of immediate and delayed recall, verbal fluency and letter search were administered at baseline and repeated in 2006-2008. Serum concentrations of biomarkers at baseline were measured in a sub-sample of participants. Associations of vitamin quartiles with baseline (n=4166) and follow-up (n=2739) cognitive domain-specific z-scores were estimated using multiple linear regression.\n\n\nRESULTS\nAfter adjusting for confounders, folate was positively associated with letter search and vitamin B-12 with word recall in cross-sectional analyses. In prospective analyses, participants in the highest quartile of folate had higher verbal fluency (p<0.01) and immediate recall (p<0.05) scores compared to those in the bottom quartile. In addition, participants in the highest quartile of vitamin B-12 had significantly higher verbal fluency scores (β=0.12; 95% CI=0.02, 0.21).\n\n\nCONCLUSIONS\nFolate and vitamin B-12 were positively associated with performance in some but not all cognitive domains in older Central and Eastern Europeans. These findings do not lend unequivocal support to potential importance of folate and vitamin B-12 status for cognitive function in older age. Long-term longitudinal studies and randomised trials are required before drawing conclusions on the role of these vitamins in cognitive decline.", "title": "" }, { "docid": "5d6bd34fb5fdb44950ec5d98e77219c3", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. 
The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.", "title": "" }, { "docid": "9a6de540169834992134eb02927d889d", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "07eb3f5527e985c33ff7132381ee266d", "text": "Since the first application of indirect composite resins, numerous advances in adhesive dentistry have been made. Furthermore, improvements in structure, composition and polymerization techniques led to the development of a second-generation of indirect resin composites (IRCs). IRCs have optimal esthetic performance, enhanced mechanical properties and reparability. Due to these characteristics they can be used for a wide range of clinical applications. IRCs can be used for inlays, onlays, crowns’ veneering material, fixed dentures prostheses and removable prostheses (teeth and soft tissue substitution), both on teeth and implants. The purpose of this article is to review the properties of these materials and describe a case series of patients treated with different type of restorations in various indications. *Corresponding author: Aikaterini Petropoulou, Clinical Instructor, Department of Prosthodontics, School of Dentistry, National and Kapodistrian University of Athens, Greece, Tel: +306932989104; E-mail: aikatpetropoulou@gmail.com Received November 10, 2013; Accepted November 28, 2013; Published November 30, 2013 Citation: Petropoulou A, Pantzari F, Nomikos N, Chronopoulos V, Kourtis S (2013) The Use of Indirect Resin Composites in Clinical Practice: A Case Series. Dentistry 3: 173. doi:10.4172/2161-1122.1000173 Copyright: © 2013 Petropoulou A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "73f5e4d9011ce7115fd7ff0be5974a14", "text": "In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. 
To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.", "title": "" }, { "docid": "c6b85518156138c22331e9c38459daf4", "text": "This paper describes a novel two-degree-of-freedom robotic interface to train opening/closing of the hand and knob manipulation. The mechanical design, based on two parallelogram structures holding an exchangeable button, offers the possibility to adapt the interface to various hand sizes and finger orientations, as well as to right-handed or left-handed subjects. The interaction with the subject is measured by means of position encoders and four force sensors located close to the output measuring grasping and insertion forces. Various knobs can be mounted on the interface, including a cone mechanism to train a complete opening movement from a strongly contracted and closed hand to a large opened position. We describe the design based on measured biomechanics, the redundant safety mechanisms as well as the actuation and control architecture. Preliminary experiments show the performance of this interface and some of the possibilities it offers for the rehabilitation of hand function.", "title": "" }, { "docid": "06037639619d64c0db424363919d9150", "text": "This paper aims to provide a brief review of cloud computing, followed by an analysis of cloud computing environment using the PESTEL framework. The future implications and limitations of adopting cloud computing as an effective eco-friendly strategy to reduce carbon footprint are also discussed in the paper. This paper concludes with a recommendation to guide researchers to further examine this phenomenon. Organizations today face tough economic times, especially following the recent global financial crisis and the evidence of catastrophic climate change. International and local businesses find themselves compelled to review their strategies. They need to consider their organizational expenses and priorities and to strategically consider how best to save. Traditionally, Information Technology (IT) department is one area that would be affected negatively in the review. Continuing to fund these strategic technologies during an economic downturn is vital to organizations. It is predicted that in coming years IT resources will only be available online. 
More and more organizations are looking at operating smarter businesses by investigating technologies such as cloud computing, virtualization and green IT to find ways to cut costs and increase efficiencies.", "title": "" }, { "docid": "c7936a373bd021c0fe0e342b3c37e137", "text": "In this work we propose Ask Me Any Rating (AMAR), a novel content-based recommender system based on deep neural networks which is able to produce top-N recommendations leveraging user and item embeddings which are learnt from textual information describing the items. A comprehensive experimental evaluation conducted on stateof-the-art datasets showed a significant improvement over all the baselines taken into account.", "title": "" } ]
scidocsrr
99fdc4ef43c759bc406f8ab245864965
Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter
[ { "docid": "522363d36c93b692265c42f9f3976461", "text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.", "title": "" }, { "docid": "9a52461cbd746e4e1df5748af37b58ed", "text": "Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or ‘‘tweets’’. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. ‘‘Toyota’’) and user-generated tags (e.g. ‘‘#irony’’). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.", "title": "" }, { "docid": "79ece5e02742de09b01908668383e8f2", "text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.", "title": "" }, { "docid": "18403ce2ebb83b9207a7cece82e91ffc", "text": "Hate speech in the form of racism and sexism is commonplace on the internet (Waseem and Hovy, 2016). For this reason, there has been both an academic and an industry interest in detection of hate speech. The volume of data to be reviewed for creating data sets encourages a use of crowd sourcing for the annotation efforts. In this paper, we provide an examination of the influence of annotator knowledge of hate speech on classification models by comparing classification results obtained from training on expert and amateur annotations. We provide an evaluation on our own data set and run our models on the data set released by Waseem and Hovy (2016). 
We find that amateur annotators are more likely than expert annotators to label items as hate speech, and that systems trained on expert annotations outperform systems trained on amateur annotations.", "title": "" }, { "docid": "05696249c57c4b0a52ddfd5598a34f00", "text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.", "title": "" } ]
[ { "docid": "e882efea987b4f248c0374c1555c668a", "text": "This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.", "title": "" }, { "docid": "759f38a59c5cd0768b3de553ec987bc0", "text": "In this paper we describe a database of static images of human faces. Images were taken in uncontrolled indoor environment using five video surveillance cameras of various qualities. Database contains 4,160 static images (in visible and infrared spectrum) of 130 subjects. Images from different quality cameras should mimic real-world conditions and enable robust face recognition algorithms testing, emphasizing different law enforcement and surveillance use case scenarios. In addition to database description, this paper also elaborates on possible uses of the database and proposes a testing protocol. A baseline Principal Component Analysis (PCA) face recognition algorithm was tested following the proposed protocol. Other researchers can use these test results as a control algorithm performance score when testing their own algorithms on this dataset. Database is available to research community through the procedure described at www.scface.org.", "title": "" }, { "docid": "869f492020b06dbd7795251858beb6f7", "text": "Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, the wearable sensor data are less informative than the conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn this classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus it is able to significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.", "title": "" }, { "docid": "ff0644de5cd474dbd858c96bb4c76fd9", "text": "With the growth of the Internet of Things, many insecure embedded devices are entering into our homes and businesses. Some of these web-connected devices lack even basic security protections such as secure password authentication. As a result, thousands of IoT devices have already been infected with malware and enlisted into malicious botnets and many more are left vulnerable to exploitation. In this paper we analyze the practical security level of 16 popular IoT devices from high-end and low-end manufacturers. We present several low-cost black-box techniques for reverse engineering these devices, including software and fault injection based techniques for bypassing password protection. We use these techniques to recover device firmware and passwords. We also discover several common design flaws which lead to previously unknown vulnerabilities. We demonstrate the effectiveness of our approach by modifying a laboratory version of the Mirai botnet to automatically include these devices.
We also discuss how to improve the security of IoT devices without significantly increasing their cost.", "title": "" }, { "docid": "349773087b8d196f1e9e83463018a52b", "text": "We introduce two appearance-based methods for clustering a set of images of 3-D objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.", "title": "" }, { "docid": "b82750baa5a775a00b72e19d3fd5d2a1", "text": "We assessed the detection rate of recurrent prostate cancer by PET/CT using anti-3-18F-FACBC, a new synthetic amino acid, in comparison to that using 11C-choline as part of an ongoing prospective single-centre study. Included in the study were 15 patients with biochemical relapse after initial radical treatment of prostate cancer. All the patients underwent anti-3-18F-FACBC PET/CT and 11C-choline PET/CT within a 7-day period. The detection rates using the two compounds were determined and the target-to-background ratios (TBR) of each lesion are reported. No adverse reactions to anti-3-18F-FACBC PET/CT were noted. On a patient basis, 11C-choline PET/CT was positive in 3 patients and negative in 12 (detection rate 20 %), and anti-3-18F-FACBC PET/CT was positive in 6 patients and negative in 9 (detection rate 40 %). On a lesion basis, 11C-choline detected 6 lesions (4 bone, 1 lymph node, 1 local relapse), and anti-3-18F-FACBC detected 11 lesions (5 bone, 5 lymph node, 1 local relapse). All 11C-choline-positive lesions were also identified by anti-3-18F-FACBC PET/CT. The TBR of anti-3-18F-FACBC was greater than that of 11C-choline in 8/11 lesions, as were image quality and contrast. Our preliminary results indicate that anti-3-18F-FACBC may be superior to 11C-choline for the identification of disease recurrence in the setting of biochemical failure. Further studies are required to assess efficacy of anti-3-18F-FACBC in a larger series of prostate cancer patients.", "title": "" }, { "docid": "61f9711b65d142b5537b7d3654bbbc3c", "text": "Now-a-days as there is prohibitive demand for agricultural industry, effective growth and improved yield of fruit is necessary and important. For this purpose farmers need manual monitoring of fruits from harvest till its progress period. But manual monitoring will not give satisfactory result all the times and they always need satisfactory advice from expert. So it requires proposing an efficient smart farming technique which will help for better yield and growth with less human efforts.
We introduce a technique which will diagnose and classify external disease within fruits. Traditional system uses thousands of words which lead to boundary of language. Whereas system that we have come up with, uses image processing techniques for implementation as image is easy way for conveying. In the proposed work, OpenCV library is applied for implementation. K-means clustering method is applied for image segmentation, the images are catalogue and mapped to their respective disease categories on basis of four feature vectors color, morphology, texture and structure of hole on the fruit. The system uses two image databases, one for implementation of query images and the other for training of already stored disease images. Artificial Neural Network (ANN) concept is used for pattern matching and classification of diseases.", "title": "" }, { "docid": "48ea93efe1a1219bfb1a6b48c20bab99", "text": "Understanding the content of user's image posts is a particularly interesting problem in social networks and web settings. Current machine learning techniques focus mostly on curated training sets of image-label pairs, and perform image classification given the pixels within the image. In this work we instead leverage the wealth of information available from users: firstly, we employ user hashtags to capture the description of image content; and secondly, we make use of valuable contextual information about the user. We show how user metadata (age, gender, etc.) combined with image features derived from a convolutional neural network can be used to perform hashtag prediction. We explore two ways of combining these heterogeneous features into a learning framework: (i) simple concatenation; and (ii) a 3-way multiplicative gating, where the image model is conditioned on the user metadata. We apply these models to a large dataset of de-identified Facebook posts and demonstrate that modeling the user can significantly improve the tag prediction quality over current state-of-the-art methods.", "title": "" }, { "docid": "5663c9fc6eb66c718235e51d8932dab4", "text": "As the number of academic papers and new technologies soars, it has been increasingly difficult for researchers, especially beginners, to enter a new research field. Researchers often need to study a promising paper in depth to keep up with the forefront of technology. Traditional Query-Oriented study method is time-consuming and even tedious. For a given paper, existent academic search engines like Google Scholar tend to recommend relevant papers, failing to reveal the knowledge structure. The state-of-the-art MapOriented study methods such as AMiner and AceMap can structure scholar information, but they’re too coarse-grained to dig into the underlying principles of a specific paper. To address this problem, we propose a Study-Map Oriented method and a novel model called RIDP (Reference Injection based Double-Damping PageRank) to help researchers study a given paper more efficiently and thoroughly. RIDP integrates newly designed Reference Injection based Topic Analysis method and Double-Damping PageRank algorithm to mine a Study Map out of massive academic papers in order to guide researchers to dig into the underlying principles of a specific paper. 
Experiment results on real datasets and pilot user studies indicate that our method can help researchers acquire knowledge more efficiently, and grasp knowledge structure systematically.", "title": "" }, { "docid": "6f66eebbe5408c3f4d5118b639fcfec0", "text": "Various types of incidents and disasters cause huge loss to people's lives and property every year and highlight the need to improve our capabilities to handle natural, health, and manmade emergencies. How to develop emergency management systems that can provide critical decision support to emergency management personnel is considered a crucial issue by researchers and practitioners. Governments, such as the USA, the European Commission, and China, have recognized the importance of emergency management and funded national level emergency management projects during the past decade. Multi-criteria decision making (MCDM) refers to the study of methods and procedures by which concerns about multiple and often competing criteria can be formally incorporated into the management planning process. Over the years, it has evolved as an important field of Operations Research, focusing on issues as: analyzing and evaluating of incompatible criteria and alternatives; modeling decision makers' preferences; developing MCDM-based decision support systems; designing MCDM research paradigm; identifying compromising solutions of multi-criteria decision making problems. İn emergency management, MCDM can be used to evaluate decision alternatives and assist decision makers in making immediate and effective responses under pressures and uncertainties. However, although various approaches and technologies have been developed in the MCDM field to handle decision problems with conflicting criteria in some domains, effective decision support in emergency management requires in depth analysis of current MCDM methods and techniques, and adaptation of these techniques specifically for emergency management. In terms of this basic fact, the guest editors determined that the purpose of this special issue should be “to assess the current state of knowledge about MCDM in emergency management and to generate and throw open for discussion, more ideas, hypotheses and theories, the specific objective being to determine directions for further research”. For this purpose, this special issue presents some new progress about MCDM in emergency management that is expected to trigger thought and deepen further research. For this purpose, 11 papers [1–11] were selected from 41 submissions related to MCDM in emergency management from different countries and regions. All the selected papers went through a standard review process of the journal and the authors of all the papers made necessary revision in terms of reviewing comments. In the selected 11 papers, they can be divided into three categories. The first category focuses on innovative MCDM methods for logistics management, which includes 3 papers. The first paper written by Liberatore et al. [1] is to propose a hierarchical compromise model called RecHADS method for the joint optimization of recovery operations and distribution of emergency goods in humanitarian logistics. In the second paper, Peng et al. [2] applies a system dynamics disruption analysis approach for inventory and logistics planning in the post-seismic supply chain risk management. 
In the third paper, Rath and Gutjahr [3] present an exact solution method and a matheuristic method to solve the warehouse location routing problem in disaster relief and obtained the good performance. In the second category, 4 papers about the MCDM-based risk assessment and risk decision-making methods in emergency response and emergency management are selected. In terms of the previous order, the fourth paper [4] is to integrate TODIM method and FSE method to formulate a new TODIM-FSE method for risk decision-making support in oil spill response. The fifth paper [5] is to utilize a fault tree analysis (FTA) method to give a risk decision-making solution to emergency response, especially in the case of the H1N1 infectious diseases. Similarly, the sixth paper [6] focuses on an analytic network process (ANP) method for risk assessment and decision analysis, while the seventh paper [7] applies cumulative prospect theory (CPT) method to risk decision analysis in emergency response. The papers in the third category emphasize on the MCDM methods for disaster assessment and emergency management and four papers are included into this category. In the similar order, the eighth paper [8] is to propose a multi-event and multi-criteria method to evaluate the situation of disaster resilience. In the ninth paper, Kou et al. [9] develop an integrated expert system for fast disaster assessment and obtain the good evaluation performance. Similarly, the 10th paper [10] proposes a multi-objective programming approach to make the optimal decisions for oil-importing plan considering country risk with extreme events. Finally, the last paper [11] in this special issue is to develop a community-based collaborative information system to manage natural and manmade disasters. The guest editors hope that the papers published in this special issue would be of value to academic researchers and business practitioners and would provide a clearer sense of direction for further research, as well as facilitating use of existing methodologies in a more productive manner. The guest editors would like to place on record their sincere thanks to Prof. Stefan Nickel, the Editor-in-Chief of Computers & Operations Research, for this very special opportunity provided to us for contributing to this special issue. The guest editors have to thank all the referees for their kind support and help. Last, but not least, the guest editors would express the gratitude to all authors of submissions in this special issue for their contribution. Without the support of the authors and the referees, it would have been", "title": "" }, { "docid": "be01b960154a975a36ad568cf17b5aca", "text": "Abstracting Interactions Based on Message Sets Svend Frølund 1 and Gul Agha 2. 1 Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94303 2 University of Illinois, 1304 W. Springfield Avenue, Urbana, IL 61801 Abstract. An important requirement of programming languages for distributed systems is to provide abstractions for coordination. A common type of coordination requires reactivity in response to arbitrary communication patterns. We have developed a communication model in which concurrent objects can be activated by sets of messages. Specifically, our model allows direct and abstract expression of common interaction patterns found in concurrent systems. For example, the model captures multiple clients that collectively invoke shared servers as a single activation.
Furthermore, it supports definition of individual clients that concurrently invoke multiple servers and wait for subsets of the returned reply messages. Message sets are dynamically defined using conjunctive and disjunctive combinators that may depend on the patterns of messages. The model subsumes existing models for multiRPC and multi-party synchronization within a single, uniform activation framework. 1 Introduction Distributed objects are often reactive, i.e. they carry out their actions in response to received messages. Traditional object-oriented languages require one to one correspondence between response and a receive message: i.e. each response is caused by exactly one message. However, many coordination schemes involve object behaviors whose logical cause is a set of messages rather than a single message. For example, consider a transaction manager in a distributed database system. In order to commit a distributed transaction, the manager must coordinate the action taken at each site involved in the transaction. A two-phase commit protocol is a possible implementation of this coordination pattern. In carrying out a two-phase commit protocol, the manager first sends out a status inquiry to all the sites involved. In response to a status inquiry, each site sends a positive reply if it can commit the transaction; a site sends back a negative reply if it cannot commit the transaction. After sending out inquiries, the manager becomes a reactive object waiting for sites to reply. The logical structure of the manager is to react to a set of replies rather than a single reply: if a positive reply is received from all sites, the manager decides to commit the transaction; if a negative reply is received from any site, the manager must abort the transaction. In traditional object-oriented languages, the programmer must implement a response to a set of messages in terms of sequences of responses to single messages. * The reported work was carried out while the first author was affiliated with the University of Illinois. The current email addresses are frolund@hpl.hp.com and agha@cs.uiuc.edu", "title": "" }, { "docid": "d56ff4b194c123b19a335e00b38ea761", "text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobile which can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobile enabled by the development of in-vehicle network. Finally, we will share our view on how the in-vehicle network can be merged into the future IoT.", "title": "" }, { "docid": "1407b7bd4f597dd64642150629349e5e", "text": "This paper presents a general trainable framework for object detection in static images of cluttered scenes.
The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as an input to a support vector machine classifier. This representation overcomes both the problem of in-class variability and provides a low false detection rate in unconstrained environments. We demonstrate the capabilities of the technique in two domains whose inherent information content differs significantly. The first system is face detection and the second is the domain of people which, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (handcrafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.", "title": "" }, { "docid": "e632895c1ab1b994f64ef03260b91acb", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Brostrom procedure with a semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and a semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum.
The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "34fdd06eb5e5d2bf9266c6852710bed2", "text": "If subjects are shown an angry face as a target visual stimulus for less than forty milliseconds and are then immediately shown an expressionless mask, these subjects report seeing the mask but not the target. However, an aversively conditioned masked target can elicit an emotional response from subjects without being consciously perceived,. Here we study the mechanism of this unconsciously mediated emotional learning. We measured neural activity in volunteer subjects who were presented with two angry faces, one of which, through previous classical conditioning, was associated with a burst of white noise. In half of the trials, the subjects' awareness of the angry faces was prevented by backward masking with a neutral face. A significant neural response was elicited in the right, but not left, amygdala to masked presentations of the conditioned angry face. Unmasked presentations of the same face produced enhanced neural activity in the left, but not right, amygdala. Our results indicate that, first, the human amygdala can discriminate between stimuli solely on the basis of their acquired behavioural significance, and second, this response is lateralized according to the subjects' level of awareness of the stimuli.", "title": "" }, { "docid": "dcac926ace799d43fedb9c27056a7729", "text": "Jinsight is a tool for exploring a program’s run-time behavior visually. It is helpful for performance analysis, debugging, and any task in which you need to better understand what your Java program is really doing. Jinsight is designed specifically with object-oriented and multithreaded programs in mind. It exposes many facets of program behavior that elude conventional tools. It reveals object lifetimes and communication, and attendant performance bottlenecks. It shows thread interactions, deadlocks, and garbage collector activity. It can also help you find and fix memory leaks, which remain a hazard despite garbage collection. A user explores program execution through one or more views. Jinsight offers several types of views, each geared toward distinct aspects of object-oriented and multithreaded program behavior. The user has several different perspectives from which to discern performance problems, unexpected behavior, or bugs small and large. Moreover, the views are linked to each other in many ways, allowing navigation from one view to another. Navigation makes the collection of views far more powerful than the sum of their individual strengths.", "title": "" }, { "docid": "73b76fa13443a4c285dc9a97cfaa22dd", "text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. 
The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.", "title": "" }, { "docid": "cd8de770f7c6dbe897d308d0cec23dc0", "text": "We present Tartanian, a game theory-based player for headsup no-limit Texas Hold’em poker. Tartanian is built from three components. First, to deal with the virtually infinite strategy space of no-limit poker, we develop a discretized betting model designed to capture the most important strategic choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries.", "title": "" }, { "docid": "0e98010ded0712ab0e2f78af0a476c86", "text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.", "title": "" }, { "docid": "1df27c9c3cdccd66eadb8916cb5f7283", "text": "Network function virtualization (NFV) is a promising technique aimed at reducing capital expenditures (CAPEX) and operating expenditures (OPEX), and improving the flexibility and scalability of an entire network. In contrast to traditional dispatching, NFV can separate network functions from proprietary infrastructure and gather these functions into a resource pool that can efficiently modify and adjust service function chains (SFCs). However, this emerging technique has some challenges. 
A major problem is reliability, which involves ensuring the availability of deployed SFCs, namely, the probability of successfully chaining a series of virtual network functions while considering both the feasibility and the specific requirements of clients, because the substrate network remains vulnerable to earthquakes, floods, and other natural disasters. Based on the premise of users’ demands for SFC requirements, we present an ensure reliability cost saving algorithm to reduce the CAPEX and OPEX of telecommunication service providers by reducing the reliability of the SFC deployments. The results of extensive experiments indicate that the proposed algorithms perform efficiently in terms of the blocking ratio, resource consumption, time consumption, and the first block.", "title": "" } ]
scidocsrr
a0ca633b598eb5e9d27c8b8087043df4
End-to-End Training of Hybrid CNN-CRF Models for Stereo
[ { "docid": "c29349c32074392e83f51b1cd214ec8a", "text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "9dbf1ae31558c80aff4edf94c446b69e", "text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.", "title": "" } ]
[ { "docid": "2ce9d2923b6b8be5027e23fb905e8b4d", "text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.", "title": "" }, { "docid": "41c718697d19ee3ca0914255426a38ab", "text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.", "title": "" }, { "docid": "223b74ccdafcd3fafa372cd6a4fbb6cb", "text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1 K to 33 K malware apps, and 38 K benign apps. 
The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%-99% and a false positive rate of 0.06%-2%, under all tested datasets and settings. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "589a96c8932c9657b2a2854de6390b1f", "text": "In this paper, proactive resource allocation based on user location for point-to-point communication over fading channels is introduced, whereby the source must transmit a packet when the user requests it within a deadline of a single time slot. We introduce a prediction model in which the source predicts the request arrival $T_p$ slots ahead, where $T_p$ denotes the prediction window (PW) size. The source allocates energy to transmit some bits proactively for each time slot of the PW with the objective of reducing the transmission energy over the non-predictive case. The requests are predicted based on the user location utilizing the prior statistics about the user requests at each location. We also assume that the prediction is not perfect. We propose proactive scheduling policies to minimize the expected energy consumption required to transmit the requested packets under two different assumptions on the channel state information at the source. In the first scenario, offline scheduling, we assume the channel states are known a-priori at the source at the beginning of the PW. In the second scenario, online scheduling, it is assumed that the source has causal knowledge of the channel state. Numerical results are presented showing the gains achieved by using proactive scheduling policies compared with classical (reactive) networks. Simulation results also show that increasing the PW size leads to a significant reduction in the consumed transmission energy even with imperfect prediction.", "title": "" }, { "docid": "2aaafa2da0ff13d91c37c5fd3c1c9ccc", "text": "The development of pharmacotherapies for cocaine addiction has been disappointingly slow. However, new neurobiological knowledge of how the brain is changed by chronic pharmacological insult with cocaine is revealing novel targets for drug development. Certain drugs currently being tested in clinical trials tap into the underlying cocaine-induced neuroplasticity, including drugs promoting GABA or inhibiting glutamate transmission. Armed with rationales derived from a neurobiological perspective that cocaine addiction is a pharmacologically induced disease of neuroplasticity in brain circuits mediating normal reward learning, one can expect novel pharmacotherapies to emerge that directly target the biological pathology of addiction.", "title": "" }, { "docid": "961cc1dc7063706f8f66fc136da41661", "text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities?
Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.", "title": "" }, { "docid": "2c92d42311f9708b7cb40f34551315e0", "text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.", "title": "" }, { "docid": "bd516d0b64e483d2210b20e4905ecd52", "text": "With the rapid growth of the internet and the spread of the information contained therein, the volume of information available on the web is more than the ability of users to manage, capture and keep the information up to date. One solution to this problem are personalization and recommender systems. Recommender systems use the comments of the group of users so that, to help people in that group more effectively to identify their favorite items from a huge set of choices. In recent years, the web has seen very strong growth in the use of blogs. Considering the high volume of information in blogs, bloggers are in trouble to find the desired information and find blogs with similar thoughts and desires. Therefore, considering the mass of information for the blogs, a blog recommender system seems to be necessary. In this paper, by combining different methods of clustering and collaborative filtering, personalized recommender system for Persian blogs is suggested.", "title": "" }, { "docid": "890a3fede570ee6777c0af7332aa0d8d", "text": "As mobile instant messaging has become a major means of communication with the widespread use of smartphones, emoticons, symbols that are meant to indicate particular emotions in instant messages, have also developed into various forms. The primary purpose of this study is to classify the usage patterns of emoticons focusing on a particular variant known as \"stickers\" to observe individual and social characteristics of emoticon use and reinterpret the meaning of emoticons in instant messages. A qualitative approach with an in-depth semi-structured interview was used to uncover the motive in using emoticon stickers. 
The study suggests that besides using emoticon stickers for expressing emotions, users may have other motives: strategic and functional purposes.", "title": "" }, { "docid": "d4aca467d0014b2c2359f5609a1a199b", "text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.", "title": "" }, { "docid": "d7e53788cbe072bdf26ea71c0a91c2b3", "text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.", "title": "" }, { "docid": "f9ebbf082da4d72c32705b74d32e864c", "text": "One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has been proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Due to the data driven approaches of hierarchical feature learning in deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Especially fully convolutional architectures have been proven efficient for segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows stateof-the-art performance in multi-organ segmentation.", "title": "" }, { "docid": "66d6f514c6bce09110780a1130b64dfe", "text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. 
The client sample was selected based on the information collected; one questionnaire was designed for the bank organization and another was prepared for the banks' clients, and the reliability and validity of both were confirmed. The research results indicate that CRM has no significant effect on marketing performance.", "title": "" },
{ "docid": "bd7f4a27628506eb707918c990704405", "text": "A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable.", "title": "" },
{ "docid": "a25e2540e97918b954acbb6fdee57eb7", "text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.", "title": "" },
{ "docid": "096772152c72d8c8fb1650a825a47d2b", "text": "The analysis of the topology and organization of brain networks is known to greatly benefit from network measures in graph theory. However, to evaluate dynamic changes of brain functional connectivity, more sophisticated quantitative metrics characterizing temporal evolution of brain topological features are required. To simplify conversion of time-varying brain connectivity to a static graph representation is straightforward but the procedure loses temporal information that could be critical in understanding the brain functions. To extend the understandings of functional segregation and integration to a dynamic fashion, we recommend dynamic graph metrics to characterise temporal changes of topological features of brain networks. This study investigated functional segregation and integration of brain networks over time by dynamic graph metrics derived from EEG signals during an experimental protocol: performance of complex flight simulation tasks with multiple levels of difficulty. 
We modelled time-varying brain functional connectivity as multi-layer networks, in which each layer models brain connectivity at time window $t+\\Delta t$ . Dynamic graph metrics were calculated to quantify temporal and topological properties of the network. Results show that brain networks under the performance of complex tasks reveal a dynamic small-world architecture with a number of frequently connected nodes or hubs, which supports the balance of information segregation and integration in brain over time. The results also show that greater cognitive workloads caused by more difficult tasks induced a more globally efficient but less clustered dynamic small-world functional network. Our study illustrates that task-related changes of functional brain network segregation and integration can be characterized by dynamic graph metrics.", "title": "" }, { "docid": "a0285beac2a4e94f295df24033c61c7a", "text": "EUCAST expert rules have been developed to assist clinical microbiologists and describe actions to be taken in response to specific antimicrobial susceptibility test results. They include recommendations on reporting, such as inferring susceptibility to other agents from results with one, suppression of results that may be inappropriate, and editing of results from susceptible to intermediate or resistant or from intermediate to resistant on the basis of an inferred resistance mechanism. They are based on current clinical and/or microbiological evidence. EUCAST expert rules also include intrinsic resistance phenotypes and exceptional resistance phenotypes, which have not yet been reported or are very rare. The applicability of EUCAST expert rules depends on the MIC breakpoints used to define the rules. Setting appropriate clinical breakpoints, based on treating patients and not on the detection of resistance mechanisms, may lead to modification of some expert rules in the future.", "title": "" }, { "docid": "6dc9ebf5dea1c78e1688a560f241f804", "text": "This paper reports finding from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed for collecting data on domestic violence against women. Each key informant provided information about 10 closest neighbouring ever-married women covering a total of 190 women. The questionnaire included information about frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related with age of husband: the odds of beating among women with husbands aged less than 30 years were six times of those with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.", "title": "" }, { "docid": "bf232413f2c1ba11bfa0ccbba3ed4010", "text": "Software Defined Networking (SDN) is an emerging promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of the software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. 
In this paper, we propose a novel clustered distributed controller architecture in the real setting of SDNs. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.", "title": "" },
{ "docid": "da416ce58897f6f86d9cd7b0de422508", "text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.", "title": "" } ]
scidocsrr
c5c495f5eac4239f4d35d20581d38d58
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
[ { "docid": "a026cb81bddfa946159d02b5bb2e341d", "text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.", "title": "" }, { "docid": "f8fe22b2801a250a52e3d19ae23804e9", "text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.", "title": "" } ]
[ { "docid": "e05fc780d1f3fd4061918e50f5dd26a0", "text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.", "title": "" }, { "docid": "c74cd5b9753579517462909bd196ad90", "text": "Interactions around money and financial services are a critical part of our lives on and off-line. New technologies and new ways of interacting with these technologies are of huge interest; they enable new business models and ways of making sense of this most important aspect of our everyday lives. At the same time, money is an essential element in HCI research and design. This workshop is intended to bring together researchers and practitioners involved in the design and use of systems that combine digital and new media with monetary and financial interactions to build on an understanding of these technologies and their impacts on users' behaviors. The workshop will focus on social, technical, and economic aspects around everyday user interactions with money and emerging financial technologies and systems.", "title": "" }, { "docid": "88ca6c25c4be7523eea29d909bd84813", "text": "A health risk appraisal function has been developed for the prediction of stroke using the Framingham Study cohort. The stroke risk factors included in the profile are age, systolic blood pressure, the use of antihypertensive therapy, diabetes mellitus, cigarette smoking, prior cardiovascular disease (coronary heart disease, cardiac failure, or intermittent claudication), atrial fibrillation, and left ventricular hypertrophy by electrocardiogram. Based on 472 stroke events occurring during 10 years' follow-up from biennial examinations 9 and 14, stroke probabilities were computed using the Cox proportional hazards model for each sex based on a point system. On the basis of the risk factors in the profile, which can be readily determined on routine physical examination in a physician's office, stroke risk can be estimated. An individual's risk can be related to the average risk of stroke for persons of the same age and sex. The information that one's risk of stroke is several times higher than average may provide the impetus for risk factor modification. It may also help to identify persons at substantially increased stroke risk resulting from borderline levels of multiple risk factors such as those with mild or borderline hypertension and facilitate multifactorial risk factor modification.", "title": "" }, { "docid": "9954793c44b1b8fc87c0ae8724e0e4de", "text": "The Khanya project has been equipping schools and educators with ICT skills and equipment to be used in the curriculum delivery in South Africa. However, research and anecdotal evidence show that there is low adoption rate of ICT among educators in Khanya schools. 
This interpretive study sets out to analyse the factors which are preventing the educators from using the technology in their work. The perspective of limited access and/or use of ICT as deprivation of capabilities provides a conceptual base for this paper. We employed Sen’s Capability Approach as a conceptual lens to examine the educators’ situation regarding ICT for teaching and learning. Data was collected through in-depth interviews with fourteen educators and two Khanya personnel. The results of the study show that there are a number of factors (personal, social and environmental) which are preventing the educators from realising their potential capabilities from the ICT.", "title": "" }, { "docid": "cb7b53be8ef7cd9330445668f8f0eee6", "text": "Humans have an innate tendency to anthropomorphize surrounding entities and have always been fascinated by the creation of machines endowed with human-inspired capabilities and traits. In the last few decades, this has become a reality with enormous advances in hardware performance, computer graphics, robotics technology, and artificial intelligence. New interdisciplinary research fields have brought forth cognitive robotics aimed at building a new generation of control systems and providing robots with social, empathetic and affective capabilities. This paper presents the design, implementation, and test of a human-inspired cognitive architecture for social robots. State-of-the-art design approaches and methods are thoroughly analyzed and discussed, cases where the developed system has been successfully used are reported. The tests demonstrated the system’s ability to endow a social humanoid robot with human social behaviors and with in-silico robotic emotions.", "title": "" }, { "docid": "8f089d55c0ce66db7bbf27476267a8e5", "text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.", "title": "" }, { "docid": "aabf75855e39682b353c46332bc218db", "text": "Semantic Web Mining is the outcome of two new and fast developing domains: Semantic Web and Data Mining. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Data Mining is the nontrivial process of identifying valid, previously unknown, potentially useful patterns in data. Semantic Web Mining refers to the application of data mining techniques to extract knowledge from World Wide Web or the area of data mining that refers to the use of algorithms for extracting patterns from resources distributed over in the web. The aim of Semantic Web Mining is to discover and retrieve useful and interesting patterns from a huge set of web data. This web data consists of different kind of information, including web structure data, web log data and user profiles data. 
Semantic Web Mining is a relatively new area, broadly interdisciplinary, attracting researchers from: computer science, information retrieval specialists and experts from business studies fields. Web data mining includes web content mining, web structure mining and web usage mining. All of these approaches attempt to extract knowledge from the web, produce some useful results from the knowledge extracted and apply these results to the real world problems. To improve the internet service quality and increase the user click rate on a specific website, it is necessary for a web developer to know what the user really want to do, predict which pages the user is potentially interested in. In this paper, various techniques for Semantic Web mining like web content mining, web usage mining and web structure mining are discussed. Our main focus is on web usage mining and its application in web personalization. Study shows that the accuracy of recommendation system has improved significantly with the use of semantic web mining in web personalization.", "title": "" }, { "docid": "d4ffeb204691f9a9188e8deecaf2d811", "text": "Salsify is a new architecture for real-time Internet video that tightly integrates a video codec and a network transport protocol, allowing it to respond quickly to changing network conditions and avoid provoking packet drops and queueing delays. To do this, Salsify optimizes the compressed length and transmission time of each frame, based on a current estimate of the network’s capacity; in contrast, existing systems generally control longer-term metrics like frame rate or bit rate. Salsify’s per-frame optimization strategy relies on a purely functional video codec, which Salsify uses to explore alternative encodings of each frame at different quality levels. We developed a testbed for evaluating real-time video systems end-to-end with reproducible video content and network conditions. Salsify achieves lower video delay and, over variable network paths, higher visual quality than five existing systems: FaceTime, Hangouts, Skype, and WebRTC’s reference implementation with and without scalable video coding.", "title": "" }, { "docid": "66878197b06f3fac98f867d5457acafe", "text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. 
Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.", "title": "" }, { "docid": "193aee1131ce05d5d4a4316871c193b8", "text": "In this paper, we discuss wireless sensor and networking technologies for swarms of inexpensive aquatic surface drones in the context of the HANCAD project. The goal is to enable the swarm to perform maritime tasks such as sea-border patrolling and environmental monitoring, while keeping the cost of each drone low. Communication between drones is essential for the success of the project. Preliminary experiments show that XBee modules are promising for energy efficient multi-hop drone-to-drone communication.", "title": "" }, { "docid": "2ad8723c9fce1a6264672f41824963f8", "text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.", "title": "" }, { "docid": "37572963400c8a78cef3cd4a565b328e", "text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. 
It is demonstrated that utilizing the horizontal and vertical optical flow is capable of achieving 80% recognition accuracy in the CASME II and SMIC-HS databases.", "title": "" },
{ "docid": "37642371bbcc3167f96548d02ccd832e", "text": "The manipulation of light-matter interactions in two-dimensional atomically thin crystals is critical for obtaining new optoelectronic functionalities in these strongly confined materials. Here, by integrating chemically grown monolayers of MoS2 with a silver-bowtie nanoantenna array supporting narrow surface-lattice plasmonic resonances, a unique two-dimensional optical system has been achieved. The enhanced exciton-plasmon coupling enables profound changes in the emission and excitation processes leading to spectrally tunable, large photoluminescence enhancement as well as surface-enhanced Raman scattering at room temperature. Furthermore, due to the decreased damping of MoS2 excitons interacting with the plasmonic resonances of the bowtie array at low temperatures stronger exciton-plasmon coupling is achieved resulting in a Fano line shape in the reflection spectrum. The Fano line shape, which is due to the interference between the pathways involving the excitation of the exciton and plasmon, can be tuned by altering the coupling strengths between the two systems via changing the design of the bowties lattice. The ability to manipulate the optical properties of two-dimensional systems with tunable plasmonic resonators offers a new platform for the design of novel optical devices with precisely tailored responses.", "title": "" },
{ "docid": "3cf458392fb61a5e70647c9c951d5db8", "text": "This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction.", "title": "" },
{ "docid": "f0f7bd0223d69184f3391aaf790a984d", "text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. 
In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.", "title": "" }, { "docid": "99efebd647fa083fab4e0f091b0b471b", "text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "be427b129a89edb6da1b21c4f8df526b", "text": "Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the system’s overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism and it is argued that commitments (pledges to undertake a specified course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.", "title": "" }, { "docid": "42167e7708bb73b08972e15a44a6df02", "text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. 
A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "title": "" }, { "docid": "374383490d88240b410a14a185ff082e", "text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.", "title": "" }, { "docid": "96da5252dac0eb0010a49519592c4104", "text": "Three-level converters are becoming a realistic alternative to the conventional converters in high-power wind-energy applications. In this paper, a complete analytical strategy to model a back-to-back three-level converter is described. This tool permits us to adapt the control strategy to the specific application. Moreover, the model of different loads can be incorporated to the overall model. Both control strategy and load models are included in the complete system model. The proposed model pays special attention to the unbalance in the capacitors' voltage of three-level converters, including the dynamics of the capacitors' voltage. In order to validate the model and the control strategy proposed in this paper, a 3-MW three-level back-to-back power converter used as a power conditioning system of a variable speed wind turbine has been simulated. Finally, the described strategy has been implemented in a 50-kVA scalable prototype as well, providing a satisfactory performance", "title": "" } ]
scidocsrr
bb030d1ba2e162693719dacbe2f7d80d
HDFI: Hardware-Assisted Data-Flow Isolation
[ { "docid": "ef95b5b3a0ff0ab0907565305d597a9d", "text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.", "title": "" }, { "docid": "e9ba4e76a3232e25233a4f5fe206e8ba", "text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.", "title": "" }, { "docid": "065e6db1710715ce5637203f1749e6f6", "text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. Inrecent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers.Hence there is a compellingneed for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware,and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. 
Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architecture. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.", "title": "" } ]
[ { "docid": "0ce83628fefd390862467d0899d20cef", "text": "We address the problem of unsupervised clustering of multidimensional data when the number of clusters is not known a priori. The proposed iterative approach is a stochastic extension of the kNN density-based clustering (KNNCLUST) method which randomly assigns objects to clusters by sampling a posterior class label distribution. In our approach, contextual class-conditional distributions are estimated based on a k nearest neighbors graph, and are iteratively modified to account for current cluster labeling. Posterior probabilities are also slightly reinforced to accelerate convergence to a stationary labeling. A stopping criterion based on the measure of clustering entropy is defined thanks to the Kozachenko-Leonenko differential entropy estimator, computed from current class-conditional entropies. One major advantage of our approach relies in its ability to provide an estimate of the number of clusters present in the data set. The application of our approach to the clustering of real hyperspectral image data is considered. Our algorithm is compared with other unsupervised clustering approaches, namely affinity propagation (AP), KNNCLUST and Non Parametric Stochastic Expectation Maximization (NPSEM), and is shown to improve the correct classification rate in most experiments.", "title": "" }, { "docid": "3bd6bf5f7e9ac02bddb20684c56847bb", "text": "Page flipping is an important part of paper-based document navigation. However this affordance of paper document has not been fully transferred to digital documents. In this paper we present Flipper, a new digital document navigation technique inspired by paper document flipping. Flipper combines speed-dependent automatic zooming (SDAZ) [6] and rapid serial visual presentation (RSVP) [3], to let users navigate through documents at a wide range of speeds. It is particularly well adapted to rapid visual search. User studies show Flipper is faster than both conventional scrolling and SDAZ and is well received by users.", "title": "" }, { "docid": "604362129b2ed5510750cc161cf54bbf", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.", "title": "" }, { "docid": "15753e152898b07fda8807c670127c72", "text": "The increasing influence of social media and enormous participation of users creates new opportunities to study human social behavior along with the capability to analyze large amount of data streams. One of the interesting problems is to distinguish between different kinds of users, for example users who are leaders and introduce new issues and discussions on social media. 
Furthermore, positive or negative attitudes can also be inferred from those discussions. Such problems require a formal interpretation of social media logs and unit of information that can spread from person to person through the social network. Once the social media data such as user messages are parsed and network relationships are identified, data mining techniques can be applied to group different types of communities. However, the appropriate granularity of user communities and their behavior is hardly captured by existing methods. In this paper, we present a framework for the novel task of detecting communities by clustering messages from large streams of social data. Our framework uses K-Means clustering algorithm along with Genetic algorithm and Optimized Cluster Distance (OCD) method to cluster data. The goal of our proposed framework is twofold that is to overcome the problem of general K-Means for choosing best initial centroids using Genetic algorithm, as well as to maximize the distance between clusters by pairwise clustering using OCD to get an accurate clusters. We used various cluster validation metrics to evaluate the performance of our algorithm. The analysis shows that the proposed method gives better clustering results and provides a novel use-case of grouping user communities based on their activities. Our approach is optimized and scalable for real-time clustering of social media data.", "title": "" }, { "docid": "5caa0646c0d5b1a2a0c799e048b5557a", "text": "The goal of this research is to find the efficient and most widely used cryptographic algorithms form the history, investigating one of its merits and demerits which have not been modified so far. Perception of cryptography, its techniques such as transposition & substitution and Steganography were discussed. Our main focus is on the Playfair Cipher, its advantages and disadvantages. Finally, we have proposed a few methods to enhance the playfair cipher for more secure and efficient cryptography.", "title": "" }, { "docid": "680be905a0f01e26e608ba7b4b79a94e", "text": "A cost-effective position measurement system based on optical mouse sensors is presented in this work. The system is intended to be used in a planar positioning stage for microscopy applications and as such, has strict resolution, accuracy, repeatability, and sensitivity requirements. Three techniques which improve the measurement system's performance in the context of these requirements are proposed; namely, an optical magnification of the image projected onto the mouse sensor, a periodic homing procedure to reset the error buildup, and a compensation of the undesired dynamics caused by filters implemented in the mouse sensor chip.", "title": "" }, { "docid": "9726da1503df569b4e6442f4f2fa8a28", "text": "An improved firefly algorithm (FA)-based band selection method is proposed for hyperspectral dimensionality reduction (DR). In this letter, DR is formulated as an optimization problem that searches a small number of bands from a hyperspectral data set, and a feature subset search algorithm using the FA is developed. To avoid employing an actual classifier within the band searching process to greatly reduce computational cost, criterion functions that can gauge class separability are preferred; specifically, the minimum estimated abundance covariance and Jeffreys-Matusita distances are employed. 
The proposed band selection technique is compared with an FA-based method that actually employs a classifier, the well-known sequential forward selection, and particle swarm optimization algorithms. Experimental results show that the proposed algorithm outperforms others, providing an effective option for DR.", "title": "" }, { "docid": "ad5b787fd972c202a69edc98a8fbc7ba", "text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.", "title": "" }, { "docid": "42d6072e6cff71043e345f474d880c18", "text": "The main purpose of this research is to design and develop complete system of a remote-operated multi-direction Unmanned Ground Vehicle (UGV). The development involved PIC microcontroller in remote-controlled and UGV robot, Xbee Pro modules, Graphic LCD 84×84, Vexta brushless DC electric motor and mecanum wheels. This paper show the study the movement of multidirectional UGV by using Mecanum wheels with differences drive configuration. The 16-bits Microchips microcontroller were used in the UGV's system that embed with Xbee Pro through variable baud-rate value via UART protocol and control the direction of wheels. The successful develop UGV demonstrated clearly the potential application of this type of vehicle, and incorporated the necessary technology for further research of this type of vehicle.", "title": "" }, { "docid": "3306636800566050599f051b0939b755", "text": "We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. 
For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of a gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network (the joint network with the CNN for ImageQA and the parameter prediction network) is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.", "title": "" },
{ "docid": "76596bc5d7b20fd746bff65e8c1447e5", "text": "Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a ‘local Nash equilibrium’ (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.", "title": "" },
{ "docid": "7057f72a1ce2e92ae01785d5b6e4a1d5", "text": "Social transmission is everywhere. Friends talk about restaurants, policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making to well-being. But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others? Traditionally, researchers have argued that rumors spread in the \"3 Cs\"—times of conflict, crisis, and catastrophe (e.g., wars or natural disasters; Koenig, 1985)—and the major explanation for this phenomenon has been generalized anxiety (i.e., apprehension about negative outcomes). Such theories can explain why rumors flourish in times of panic, but they are less useful in explaining the prevalence of rumors in positive situations, such as the Cannes Film Festival or the dot-com boom. Further, although recent work on the social sharing of emotion suggests that positive emotion may also increase transmission, why emotions drive sharing and why some emotions boost sharing more than others remains unclear. I suggest that transmission is driven in part by arousal. Physiological arousal is characterized by activation of the autonomic nervous system (Heilman, 1997), and the mobilization provided by this excitatory state may boost sharing. 
This hypothesis not only suggests why content that evokes more of certain emotions (e.g., disgust) may be shared more than other content (see prior work for a review), but also suggests a more precise prediction, namely, that emotions characterized by high arousal, such as anxiety or amusement (Gross & Levenson, 1995), will boost sharing more than emotions characterized by low arousal, such as sadness or contentment. This idea was tested in two experiments. They examined how manipulations that increase general arousal (i.e., watching emotional videos or jogging in place) affect the social transmission of unrelated content (e.g., a neutral news article). If arousal increases transmission, even incidental arousal (i.e., outside the focal content being shared) should spill over and boost sharing. In the first experiment, 93 students completed what they were told were two unrelated studies. The first evoked specific emotions by using film clips validated in prior research (Christie & Friedman, 2004; Gross & Levenson, 1995). Participants in the control condition watched a neutral clip; those in the experimental conditions watched an emotional clip. Emotional arousal and valence were manipulated independently so that high- and low-arousal emotions of both a positive (amusement vs. contentment) and a negative (anxiety vs. …", "title": "" },
{ "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" },
{ "docid": "4d0185efbe22d65e5bb8bbf0a31fe51c", "text": "Determining the polarity of a sentiment-bearing expression requires more than a simple bag-of-words approach. In particular, words or constituents within the expression can interact with each other to yield a particular overall polarity. In this paper, we view such subsentential interactions in light of compositional semantics, and present a novel learning-based approach that incorporates structural inference motivated by compositional semantics into the learning procedure. 
Our experiments show that (1) simple heuristics based on compositional semantics can perform better than learning-based methods that do not incorporate compositional semantics (accuracy of 89.7% vs. 89.1%), but (2) a method that integrates compositional semantics into learning performs better than all other alternatives (90.7%). We also find that “contentword negators”, not widely employed in previous work, play an important role in determining expression-level polarity. Finally, in contrast to conventional wisdom, we find that expression-level classification accuracy uniformly decreases as additional, potentially disambiguating, context is considered.", "title": "" }, { "docid": "c0af64c6c2b72ab4cca04a3250a8c0cb", "text": "Omega-3 polyunsaturated fatty acids such as eicosapentaenoic acid and docosahexaenoic acid have beneficial effects in many inflammatory disorders. Although the mechanism of eicosapentaenoic acid and docosahexaenoic acid action is still not fully defined in molecular terms, recent studies have revealed that, during the course of acute inflammation, omega-3 polyunsaturated fatty acid-derived anti-inflammatory mediators including resolvins and protectins are produced. This review presents recent advances in understanding the formation and action of these mediators, especially focusing on the LC-MS/MS-based lipidomics approach and recently identified bioactive products with potent anti-inflammatory property.", "title": "" }, { "docid": "67ae045b8b9a8e181ed0a33b204528cf", "text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.", "title": "" }, { "docid": "1f62f4d5b84de96583e17fdc0f4828be", "text": "This study examined age differences in perceptions of online communities held by people who were not yet participating in these relatively new social spaces. Using the Technology Acceptance Model (TAM), we investigated the factors that affect future intention to participate in online communities. Our results supported the proposition that perceived usefulness positively affects behavioral intention, yet it was determined that perceived ease of use was not a significant predictor of perceived usefulness. The study also discovered negative relationships between age and Internet self-efficacy and the perceived quality of online community websites. However, the moderating role of age was not found. The findings suggest that the relationships among perceived ease of use, perceived usefulness, and intention to participate in online communities do not change with age. Theoretical and practical implications and limitations were discussed. ! 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6e28ce874571ef5db8f5e44ff78488d2", "text": "The importance of the maintenance function has increased because of its role in keeping and improving system availability and safety, as well as product quality. 
To support this role, the development of the communication and information technologies has allowed the emergence of the concept of e-maintenance. Within the era of e-manufacturing and e-business, e-maintenance provides the opportunity for a new maintenance generation. As we will discuss later in this paper, e-maintenance integrates existing telemaintenance principles, with Web services and modern e-collaboration principles. Collaboration allows to share and exchange not only information but also knowledge and (e)-intelligence. By means of a collaborative environment, pertinent knowledge and intelligence become available and usable at the right place and time, in order to facilitate reaching the best maintenance decisions. This paper outlines the basic ideas within the e-maintenance concept and then provides an overview of the current research and challenges in this emerging field. An underlying objective is to identify the industrial/academic actors involved in the technological, organizational or management issues related to the development of e-maintenance. Today, this heterogeneous community has to be federated in order to bring up e-maintenance as a new scientific discipline. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "23b18b2795b0e5ff619fd9e88821cfad", "text": "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose “Answerer in Questioner’s Mind” (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer’s intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: “MNIST Counting Dialog” and “GuessWhat?!.” In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.", "title": "" }, { "docid": "c098ef9c2302ce30e4d025c100cbcaa4", "text": "BACKGROUND\nDengue is re-emerging throughout the tropical world, causing frequent recurrent epidemics. The initial clinical manifestation of dengue often is confused with other febrile states confounding both clinical management and disease surveillance. Evidence-based triage strategies that identify individuals likely to be in the early stages of dengue illness can direct patient stratification for clinical investigations, management, and virological surveillance. Here we report the identification of algorithms that differentiate dengue from other febrile illnesses in the primary care setting and predict severe disease in adults.\n\n\nMETHODS AND FINDINGS\nA total of 1,200 patients presenting in the first 72 hours of acute febrile illness were recruited and followed up for up to a 4-week period prospectively; 1,012 of these were recruited from Singapore and 188 from Vietnam. 
Of these, 364 were dengue RT-PCR positive; 173 had dengue fever, 171 had dengue hemorrhagic fever, and 20 had dengue shock syndrome as final diagnosis. Using a C4.5 decision tree classifier for analysis of all clinical, haematological, and virological data, we obtained a diagnostic algorithm that differentiates dengue from non-dengue febrile illness with an accuracy of 84.7%. The algorithm can be used differently in different disease prevalence to yield clinically useful positive and negative predictive values. Furthermore, an algorithm using platelet count, crossover threshold value of a real-time RT-PCR for dengue viral RNA, and presence of pre-existing anti-dengue IgG antibodies in sequential order identified cases with sensitivity and specificity of 78.2% and 80.2%, respectively, that eventually developed thrombocytopenia of 50,000 platelet/mm(3) or less, a level previously shown to be associated with haemorrhage and shock in adults with dengue fever.\n\n\nCONCLUSION\nThis study shows a proof-of-concept that decision algorithms using simple clinical and haematological parameters can predict diagnosis and prognosis of dengue disease, a finding that could prove useful in disease management and surveillance.", "title": "" } ]
scidocsrr
ff6f4de81ce23bf1c5bcba5c2e1be9ab
The relational self: an interpersonal social-cognitive theory.
[ { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" } ]
[ { "docid": "d5868da2fedb7498a9d6454ed939408c", "text": "over concrete thinking Understand that virtual objects are computer generated, and they do not need to obey physical laws", "title": "" }, { "docid": "4f069eeff7cf99679fb6f31e2eea55f0", "text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization", "title": "" }, { "docid": "f8d44bd997e8af8d0ad23450790c1fec", "text": "We report on the design objectives and initial design of a new discrete-event network simulator for the research community. Creating Yet Another Network Simulator (yans, http://yans.inria.fr/yans) is not the sort of prospect network researchers are happy to contemplate, but this effort may be timely given that ns-2 is considering a major revision and is evaluating new simulator cores. We describe why we did not choose to build on existing tools such as ns-2, GTNetS, and OPNET, outline our functional requirements, provide a high-level view of the architecture and core components, and describe a new IEEE 802.11 model provided with yans.", "title": "" }, { "docid": "35eb5c51ff22ae0c350e5fc4eb8faa43", "text": "We propose gradient adversarial training, an auxiliary deep learning framework applicable to different machine learning problems. In gradient adversarial training, we leverage a prior belief that in many contexts, simultaneous gradient updates should be statistically indistinguishable from each other. We enforce this consistency using an auxiliary network that classifies the origin of the gradient tensor, and the main network serves as an adversary to the auxiliary network in addition to performing standard task-based training. 
We demonstrate gradient adversarial training for three different scenarios: (1) as a defense to adversarial examples we classify gradient tensors and tune them to be agnostic to the class of their corresponding example, (2) for knowledge distillation, we do binary classification of gradient tensors derived from the student or teacher network and tune the student gradient tensor to mimic the teacher’s gradient tensor; and (3) for multi-task learning we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable. For each of the three scenarios we show the potential of gradient adversarial training procedure. Specifically, gradient adversarial training increases the robustness of a network to adversarial attacks, is able to better distill the knowledge from a teacher network to a student network compared to soft targets, and boosts multi-task learning by aligning the gradient tensors derived from the task specific loss functions. Overall, our experiments demonstrate that gradient tensors contain latent information about whatever tasks are being trained, and can support diverse machine learning problems when intelligently guided through adversarialization using a auxiliary network.", "title": "" }, { "docid": "f95863031edd888b9f841cde0af4c9be", "text": "The research tries to identify factors that are critical for a Big Data project’s success. In total 27 success factors could be identified throughout the analysis of these published case studies. Subsequently, to the identification the success factors were categorized according to their importance for the project’s success. During the categorization process 6 out of the 27 success factors were declared mission critical. Besides this identification of success factors, this thesis provides a process model, as a suggested way to approach Big Data projects. The process model is divided into separate phases. In addition to a description of the tasks to fulfil, the identified success factors are assigned to the individual phases of the analysis process. Finally, this thesis provides a process model for Big Data projects and also assigns success factors to individual process stages, which are categorized according to their importance for the success of the entire project.", "title": "" }, { "docid": "3849284adb68f41831434afbf23be9ed", "text": "Automatic estrus detection techniques in dairy cows have been present by different traits. Pedometers and accelerators are the most common sensor equipment. Most of the detection methods are associated with the supervised classification technique, which the training set becomes a crucial reference. The training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, activity type of cows, the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses k-means clustering algorithm to create the training set online for each cow. And the training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure cow’s activity variation for assessing the onset of estrus. 
The experimental results indicate that the peak of estrus time are higher than that of non-estrus time at least twice in the activity index curve, and it can enhance the sensitivity and significantly reduce the error rate.", "title": "" }, { "docid": "155c692223bf8698278023c04e07f135", "text": "Structure-function studies with mammalian reoviruses have been limited by the lack of a reverse-genetic system for engineering mutations into the viral genome. To circumvent this limitation in a partial way for the major outer-capsid protein sigma3, we obtained in vitro assembly of large numbers of virion-like particles by binding baculovirus-expressed sigma3 protein to infectious subvirion particles (ISVPs) that lack sigma3. A level of sigma3 binding approaching 100% of that in native virions was routinely achieved. The sigma3 coat in these recoated ISVPs (rcISVPs) appeared very similar to that in virions by electron microscopy and three-dimensional image reconstruction. rcISVPs retained full infectivity in murine L cells, allowing their use to study sigma3 functions in virus entry. Upon infection, rcISVPs behaved identically to virions in showing an extended lag phase prior to exponential growth and in being inhibited from entering cells by either the weak base NH4Cl or the cysteine proteinase inhibitor E-64. rcISVPs also mimicked virions in being incapable of in vitro activation to mediate lysis of erythrocytes and transcription of the viral mRNAs. Last, rcISVPs behaved like virions in showing minor loss of infectivity at 52 degrees C. Since rcISVPs contain virion-like levels of sigma3 but contain outer-capsid protein mu1/mu1C mostly cleaved at the delta-phi junction as in ISVPs, the fact that rcISVPs behaved like virions (and not ISVPs) in all of the assays that we performed suggests that sigma3, and not the delta-phi cleavage of mu1/mu1C, determines the observed differences in behavior between virions and ISVPs. To demonstrate the applicability of rcISVPs for genetic studies of protein functions in reovirus entry (an approach that we call recoating genetics), we used chimeric sigma3 proteins to localize the primary determinants of a strain-dependent difference in sigma3 cleavage rate to a carboxy-terminal region of the ISVP-bound protein.", "title": "" }, { "docid": "a69afd6dc7b2f1bc6ce00dce9e395559", "text": "We present a family with a Robertsonian translocation (RT) 15;21 and an inv(21)(q21.1q22.1) which was ascertained after the birth of a child with Down syndrome. Karyotyping revealed a translocation trisomy 21 in the patient. The mother was a carrier of a paternally inherited RT 15;21. Additionally, she and her mother showed a rare paracentric inversion of chromosome 21 which could not be observed in the Down syndrome patient. Thus, we concluded that the two free chromosomes 21 in the patient were of paternal origin. Remarkably, short tandem repeat (STR) typing revealed that the proband showed one paternal allele but two maternal alleles, indicating a maternal origin of the supernumerary chromosome 21. Due to the fact that chromosome analysis showed structurally normal chromosomes 21, a re-inversion of the free maternally inherited chromosome 21 must have occurred. Re-inversion and meiotic segregation error may have been co-incidental but unrelated events. 
Alternatively, the inversion or RT could have predisposed to maternal non-disjunction.", "title": "" }, { "docid": "a607addf74880bcbfc2f097ae4c06a31", "text": "In this paper, we take an input-output approach to enhance the study of cooperative multiagent optimization problems that admit decentralized and selfish solutions, hence eliminating the need for an interagent communication network. The framework under investigation is a set of $n$ independent agents coupled only through an overall cost that penalizes the divergence of each agent from the average collective behavior. In the case of identical agents, or more generally agents with identical essential input-output dynamics, we show that optimal decentralized and selfish solutions are possible in a variety of standard input-output cost criteria. These include the cases of $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced, and $\\mathcal{H}_{2}$ norms for any finite $n$. Moreover, if the cost includes non-deviation from average variables, the above results hold true as well for $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced norms and any $n$, while they hold true for the normalized, per-agent square $\\mathcal{H}_{2}$ norm, cost as $n\\rightarrow\\infty$. We also consider the case of nonidentical agent dynamics and prove that similar results hold asymptotically as $n\\rightarrow\\infty$ in the case of $\\ell_{2}$ induced norms (i.e., $\\mathcal{H}_{\\infty}$) under a growth assumption on the $\\mathcal{H}_{\\infty}$ norm of the essential dynamics of the collective.", "title": "" }, { "docid": "02cd879a83070af9842999c7215e7f92", "text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.", "title": "" }, { "docid": "09f19a5e4751dc3ee4aa38817aafd3cf", "text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013", "title": "" }, { "docid": "8db565f101f8403b8107805731ba1f80", "text": "Presents a collection of slides covering the following topics:issues and challenges in power distribution network design; basics of power supply induced jitter (PSIJ) modeling; PSIJ design and modeling for key applications; and memory and parallel bus interfaces (serial links and digital logic timing).", "title": "" }, { "docid": "e4d58b9b8775f2a30bc15fceed9cd8bf", "text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. 
A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.", "title": "" }, { "docid": "55b2465349e4965a35b4c894c5545afb", "text": "Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer to develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design principles behind JCAF, its runtime architecture, and its programming API. The paper presents some applications of using JCAF in three different applications and discusses lessons learned from using JCAF.", "title": "" }, { "docid": "201f576423ed88ee97d1505b6d5a4d3f", "text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.", "title": "" }, { "docid": "d2edbca2ed1e4952794d97f6e34e02e4", "text": "In today’s world, almost everybody is affluent with computers and network based technology is growing by leaps and bounds. So, network security has become very important, rather an inevitable part of computer system. 
An Intrusion Detection System (IDS) is designed to detect system attacks and classify system activities into normal and abnormal form. Machine learning techniques have been applied to intrusion detection systems which have an important role in detecting Intrusions. This paper reviews different machine approaches for Intrusion detection system. This paper also presents the system design of an Intrusion detection system to reduce false alarm rate and improve accuracy to detect intrusion.", "title": "" }, { "docid": "8695757545e44358fd63f06936335903", "text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.", "title": "" }, { "docid": "474134af25f1a5cd95b3bc29b3df8be4", "text": "The challenge of combatting malware designed to breach air-gap isolation in order to leak data.", "title": "" }, { "docid": "b216a38960c537d52d94adc8d50a43df", "text": "BACKGROUND\nAutologous platelet-rich plasma has attracted attention in various medical fields recently, including orthopedic, plastic, and dental surgeries and dermatology for its wound healing ability. Further, it has been used clinically in mesotherapy for skin rejuvenation.\n\n\nOBJECTIVE\nIn this study, the effects of activated platelet-rich plasma (aPRP) and activated platelet-poor plasma (aPPP) have been investigated on the remodelling of the extracellular matrix, a process that requires activation of dermal fibroblasts, which is essential for rejuvenation of aged skin.\n\n\nMETHODS\nPlatelet-rich plasma (PRP) and platelet-poor plasma (PPP) were prepared using a double-spin method and then activated with thrombin and calcium chloride. The proliferative effects of aPRP and aPPP were measured by [(3)H]thymidine incorporation assay, and their effects on matrix protein synthesis were assessed by quantifying levels of procollagen type I carboxy-terminal peptide (PIP) by enzyme-linked immunosorbent assay (ELISA). The production of collagen and matrix metalloproteinases (MMP) was studied by Western blotting and reverse transcriptase-polymerase chain reaction.\n\n\nRESULTS\nPlatelet numbers in PRP increased to 9.4-fold over baseline values. aPRP and aPPP both stimulated cell proliferation, with peak proliferation occurring in cells grown in 5% aPRP. Levels of PIP were highest in cells grown in the presence of 5% aPRP. 
Additionally, aPRP and aPPP increased the expression of type I collagen, MMP-1 protein, and mRNA in human dermal fibroblasts.\n\n\nCONCLUSION\naPRP and aPPP promote tissue remodelling in aged skin and may be used as adjuvant treatment to lasers for skin rejuvenation in cosmetic dermatology.", "title": "" }, { "docid": "d4896aa12be18aea9a6639422ee12d92", "text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.", "title": "" } ]
scidocsrr
4f1f89811a3891b2e81d9aae26096368
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components
[ { "docid": "88a1549275846a4fab93f5727b19e740", "text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.", "title": "" } ]
[ { "docid": "f9d44eac4e07ed72e59d1aa194105615", "text": "Each human intestine harbours not only hundreds of trillions of bacteria but also bacteriophage particles, viruses, fungi and archaea, which constitute a complex and dynamic ecosystem referred to as the gut microbiota. An increasing number of data obtained during the last 10 years have indicated changes in gut bacterial composition or function in type 2 diabetic patients. Analysis of this ‘dysbiosis’ enables the detection of alterations in specific bacteria, clusters of bacteria or bacterial functions associated with the occurrence or evolution of type 2 diabetes; these bacteria are predominantly involved in the control of inflammation and energy homeostasis. Our review focuses on two key questions: does gut dysbiosis truly play a role in the occurrence of type 2 diabetes, and will recent discoveries linking the gut microbiota to host health be helpful for the development of novel therapeutic approaches for type 2 diabetes? Here we review how pharmacological, surgical and nutritional interventions for type 2 diabetic patients may impact the gut microbiota. Experimental studies in animals are identifying which bacterial metabolites and components act on host immune homeostasis and glucose metabolism, primarily by targeting intestinal cells involved in endocrine and gut barrier functions. We discuss novel approaches (e.g. probiotics, prebiotics and faecal transfer) and the need for research and adequate intervention studies to evaluate the feasibility and relevance of these new therapies for the management of type 2 diabetes.", "title": "" }, { "docid": "8f9f1bdc6f41cb5fd8b285a9c41526c1", "text": "The rivalry between the cathode-ray tube and flat-panel displays (FPDs) has intensified as performance of some FPDs now exceeds that of that entrenched leader in many cases. Besides the wellknown active-matrix-addressed liquid-crystal display, plasma, organic light-emitting diodes, and liquid-crystal-on-silicon displays are now finding new applications as the manufacturing, process engineering, materials, and cost structures become standardized and suitable for large markets.", "title": "" }, { "docid": "2ab2280b7821ae6ad27fff995fd36fe0", "text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.", "title": "" }, { "docid": "9fc6244b3d0301a8486d44d58cf95537", "text": "The aim of this paper is to explore some, ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. 
On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.", "title": "" },
{ "docid": "d57072f4ffa05618ebf055824e7ae058", "text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.", "title": "" },
{ "docid": "2e16ba9c13525dee6831d0a5c66a0671", "text": "1.1 Equivalent definitions of a stable distribution (p. 2); 1.2 Properties of stable random variables (p. 10); 1.3 Symmetric α-stable random variables (p. 20); 1.4 Series representation (p. 21); 1.5 Series representation of skewed α-stable random variables (p. 30); 1.6 Graphs and tables of α-stable densities and c.d.f.'s (p. 35); 1.7 Simulation (p. 41); 1.8 Exercises (p. 49)", "title": "" },
{ "docid": "0eb3d3c33b62c04ed5d34fc3a38b5182", "text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. 
We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.", "title": "" }, { "docid": "d16ec1f4c32267a07b1453d45bc8a6f2", "text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.", "title": "" }, { "docid": "b6fdde5d6baeb546fd55c749af14eec1", "text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. 
A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.", "title": "" },
{ "docid": "9ea9b364e2123d8917d4a2f25e69e084", "text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.", "title": "" },
{ "docid": "f03f84bfa290fd3d1df6d9249cd9d8a6", "text": "We suggest a new technique to reduce energy consumption in the processor datapath without sacrificing performance by exploiting operand value locality at run time. Data locality is one of the major characteristics of video streams as well as other commonly used applications. We use a cache-like scheme to store a selective history of computation results, and the resultant reuse leads to power savings. The cache is indexed by the operands. Based on our model, an 8 to 128 entry execution cache reduces power consumption by 20% to 60%.", "title": "" },
{ "docid": "647ff27223a27396ffc15c24c5ff7ef1", "text": "Mobile phones are increasingly used for security sensitive activities such as online banking or mobile payments. This usually involves some cryptographic operations, and therefore introduces the problem of securely storing the corresponding keys on the phone. In this paper we evaluate the security provided by various options for secure storage of key material on Android, using either Android's service for key storage or the key storage solution in the Bouncy Castle library. The security provided by the key storage service of the Android OS depends on the actual phone, as it may or may not make use of ARM TrustZone features. Therefore we investigate this for different models of phones.\n We find that the hardware-backed version of the Android OS service does offer device binding -- i.e. 
keys cannot be exported from the device -- though they could be used by any attacker with root access. This last limitation is not surprising, as it is a fundamental limitation of any secure storage service offered from the TrustZone's secure world to the insecure world. Still, some of Android's documentation is a bit misleading here.\n Somewhat to our surprise, we find that in some respects the software-only solution of Bouncy Castle is stronger than the Android OS service using TrustZone's capabilities, in that it can incorporate a user-supplied password to secure access to keys and thus guarantee user consent.", "title": "" },
{ "docid": "8c28ec4f3dd42dc9d53fed2e930f7a77", "text": "If a theory of concept composition aspires to psychological plausibility, it may first need to address several preliminary issues associated with naturally occurring human concepts: content variability, multiple representational forms, and pragmatic constraints. Not only do these issues constitute a significant challenge for explaining individual concepts, they pose an even more formidable challenge for explaining concept compositions. How do concepts combine as their content changes, as different representational forms become active, and as pragmatic constraints shape processing? Arguably, concepts are most ubiquitous and important in compositions, relative to when they occur in isolation. Furthermore, entering into compositions may play central roles in producing the changes in content, form, and pragmatic relevance observed for individual concepts. Developing a theory of concept composition that embraces and illuminates these issues would not only constitute a significant contribution to the study of concepts, it would provide insight into the nature of human cognition. The human ability to construct and combine concepts is prolific. On the one hand, people acquire tens of thousands of concepts for diverse categories of settings, agents, objects, actions, mental states, bodily states, properties, relations, and so forth. On the other, people combine these concepts to construct infinite numbers of more complex concepts, as the open-ended phrases, sentences, and texts that humans produce effortlessly and ubiquitously illustrate. Major changes in the brain, the emergence of language, and new capacities for social cognition all probably played central roles in the evolution of these impressive conceptual abilities (e.g., Deacon 1997; Donald 1993; Tomasello 2009). In psychology alone, much research addresses human concepts (e.g., Barsalou 2012; Murphy 2002; Smith and Medin 1981) and concept composition (often referred to as conceptual combination; e.g., Costello and Keane 2000; Gagné and Spalding 2014; Hampton 1997; Hampton and Jönsson 2012; Medin and Shoben 1988; Murphy 1988; Wisniewski 1997; Wu and Barsalou 2009). More generally across the cognitive sciences, much additional research addresses concepts and the broader construct of compositionality (for a recent collection, see Werning et al. 2012). 1 Background Framework A grounded approach to concepts. 
Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances (for further detail, see Barsalou 2003b, 2009, 2012, 2016a, 2016b). The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world. Across interactions with a category’s instances, a concept develops in memory by aggregating information from perception, action, and internal states. Thus, the concept of bicycle develops from aggregating multimodal information related to bicycles across the situations in which they are experienced. As a consequence of using selective attention to extract information relevant to the concept of bicycle from the current situation (e.g., a perceived bicycle), and then using integration mechanisms to integrate it with other bicycle information already in memory, aggregate information for the category develops continually (Barsalou 1999). As described later, however, background situational knowledge is also captured that plays important roles in conceptual processing (Barsalou 2016b, 2003b; Yeh and Barsalou 2006). Although learning plays central roles in establishing concepts, genetic and epigenetic processes constrain the features that can be represented for a concept, and also their integration in the brain’s association areas (e.g., Simmons and Barsalou 2003). For example, biologically-based neural circuits may anticipate the conceptual structure of evolutionarily important concepts, such as agents, minds, animals, foods, and tools. Once the conceptual system is in place, it supports virtually all other forms of cognitive activity, both online in the current situation and offline when representing the world in language, memory, and thought (e.g., Barsalou 2012, 2016a, 2016b). From the perspective developed here, when conceptual knowledge is needed for a task, concepts produce situation-specific simulations of the relevant category dynamically, where a simulation attempts to reenact the kind of neural and bodily states associated with processing the category. On needing conceptual knowledge about bicycles, for example, a small subset of the distributed bicycle network in the brain becomes active to simulate what it would be like to interact with an actual bicycle. This multimodal simulation provides anticipatory inferences about what is likely to be perceived further for the bicycle in the current situation, how to interact with it effectively, and what sorts of internal states might result (Barsalou 2009). The specific bicycle simulation that becomes active is one of infinitely many simulations that could be constructed dynamically from the bicycle network—the entire network never becomes fully active. Typically, simulations remain unconscious, at least to a large extent, while causally influencing cognition, affect, and 10 L.W. Barsalou", "title": "" }, { "docid": "cade9bc367068728bde84df622034b46", "text": "Authentication is an important topic in cloud computing security. That is why various authentication techniques in cloud environment are presented in this paper. This process serves as a protection against different sorts of attacks where the goal is to confirm the identity of a user and the user requests services from cloud servers. Multiple authentication technologies have been put forward so far that confirm user identity before giving the permit to access resources. 
Each of these technologies (username and password, multi-factor authentication, mobile trusted module, public key infrastructure, single sign-on, and biometric authentication) is at first described in here. The different techniques presented will then be compared. Keywords— Cloud computing, security, authentication, access control,", "title": "" }, { "docid": "f022871509e863f6379d76ba80afaa2f", "text": "Neuroeconomics seeks to gain a greater understanding of decision making by combining theoretical and methodological principles from the fields of psychology, economics, and neuroscience. Initial studies using this multidisciplinary approach have found evidence suggesting that the brain may be employing multiple levels of processing when making decisions, and this notion is consistent with dual-processing theories that have received extensive theoretical consideration in the field of cognitive psychology, with these theories arguing for the dissociation between automatic and controlled components of processing. While behavioral studies provide compelling support for the distinction between automatic and controlled processing in judgment and decision making, less is known if these components have a corresponding neural substrate, with some researchers arguing that there is no evidence suggesting a distinct neural basis. This chapter will discuss the behavioral evidence supporting the dissociation between automatic and controlled processing in decision making and review recent literature suggesting potential neural systems that may underlie these processes.", "title": "" }, { "docid": "f08b294c1107372d81c39f13ee2caa34", "text": "The success of deep learning methodologies draws a huge attention to their applications in medical image analysis. One of the applications of deep learning is in segmentation of retinal vessel and severity classification of diabetic retinopathy (DR) from retinal funduscopic image. This paper studies U-Net model performance in segmenting retinal vessel with different settings of dropout and batch normalization and use it to investigate the effect of retina vessel in DR classification. Pre-trained Inception V1 network was used to classify the DR severity. Two sets of retinal images, with and without the presence of vessel, were created from MESSIDOR dataset. The vessel extraction process was done using the best trained U-Net on DRIVE dataset. Final analysis showed that retinal vessel is a good feature in classifying both severe and early cases of DR stage.", "title": "" }, { "docid": "950d7d10b09f5d13e09692b2a4576c00", "text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. 
Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.", "title": "" }, { "docid": "4922c751dded99ca83e19d51eb5d647e", "text": "The viewpoint consistency constraint requires that the locations of all object features in an image must be consistent with projection from a single viewpoint. The application of this constraint is central to the problem of achieving robust recognition, since it allows the spatial information in an image to be compared with prior knowledge of an object's shape to the full degree of available image resolution. In addition, the constraint greatly reduces the size of the search space during model-based matching by allowing a few initial matches to provide tight constraints for the locations of other model features. Unfortunately, while simple to state, this constraint has seldom been effectively applied in model-based computer vision systems. This paper reviews the history of attempts to make use of the viewpoint consistency constraint and then describes a number of new techniques for applying it to the process of model-based recognition. A method is presented for probabilistically evaluating new potential matches to extend and refine an initial viewpoint estimate. This evaluation allows the model-based verification process to proceed without the expense of backtracking or search. It will be shown that the effective application of the viewpoint consistency constraint, in conjunction with bottom-up image description based upon principles of perceptual organization, can lead to robust three-dimensional object recognition from single gray-scale images.", "title": "" }, { "docid": "7bb17491cb10db67db09bc98aba71391", "text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.", "title": "" }, { "docid": "e56af4a3a8fbef80493d77b441ee1970", "text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. 
As design examples, wideband quasi-Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.", "title": "" } ]
scidocsrr
b0c429d2600073cac40209bcd9c28b55
Fast Image Inpainting Based on Coherence Transport
[ { "docid": "da237e14a3a9f6552fc520812073ee6c", "text": "Shock filters are based on the idea of applying locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea of combining shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.", "title": "" } ]
[ { "docid": "124fa48e1e842f2068a8fb55a2b8bb8e", "text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.", "title": "" }, { "docid": "93bad64439be375200cce65a37c6b8c6", "text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.", "title": "" }, { "docid": "d5019a5536950482e166d68dc3a7cac7", "text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. 
Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.", "title": "" }, { "docid": "a009519d1ed930d40db593542e7c3e0d", "text": "With the increasing adoption of NoSQL data base systems like MongoDB or CouchDB more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptions to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.", "title": "" }, { "docid": "de1529bcfee8a06969ee35318efe3dc3", "text": "This paper studies the prediction of head pose from still images, and summarizes the outcome of a recently organized competition, where the task was to predict the yaw and pitch angles of an image dataset with 2790 samples with known angles. The competition received 292 entries from 52 participants, the best ones clearly exceeding the state-of-the-art accuracy. In this paper, we present the key methodologies behind selected top methods, summarize their prediction accuracy and compare with the current state of the art.", "title": "" }, { "docid": "95bbe5d13f3ca5f97d01f2692a9dc77a", "text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. 
The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.", "title": "" }, { "docid": "2efe5c0228e6325cdbb8e0922c19924f", "text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.", "title": "" }, { "docid": "94bd0b242079d2b82c141e9f117154f7", "text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.", "title": "" }, { "docid": "82cf8d72eebcc7cfa424cf09ed80d025", "text": "Along with its numerous benefits, the Internet also created numerous ways to compromise the security and stability of the systems connected to it. 
In 2003, 137529 incidents were reported to CERT/CC © while in 1999, there were 9859 reported incidents (CERT/CC©, 2003). Operations, which are primarily designed to protect the availability, confidentiality and integrity of critical network information systems, are considered to be within the scope of security management. Security management operations protect computer networks against denial-of-service attacks, unauthorized disclosure of information, and the modification or destruction of data. Moreover, the automated detection and immediate reporting of these events are required in order to provide the basis for a timely response to attacks (Bass, 2000). Security management plays an important, albeit often neglected, role in network management tasks.", "title": "" }, { "docid": "020799a5f143063b843aaf067f52cf29", "text": "In this paper we propose a novel entity annotator for texts which hinges on TagME's algorithmic technology, currently the best one available. The novelty is twofold: from the one hand, we have engineered the software in order to be modular and more efficient; from the other hand, we have improved the annotation pipeline by re-designing all of its three main modules: spotting, disambiguation and pruning. In particular, the re-design has involved the detailed inspection of the performance of these modules by developing new algorithms which have been in turn tested over all publicly available datasets (i.e. AIDA, IITB, MSN, AQUAINT, and the one of the ERD Challenge). This extensive experimentation allowed us to derive the best combination which achieved on the ERD development dataset an F1 score of 74.8%, which turned to be 67.2% F1 for the test dataset. This final result was due to an impressive precision equal to 87.6%, but very low recall 54.5%. With respect to classic TagME on the development dataset the improvement ranged from 1% to 9% on the D2W benchmark, depending on the disambiguation algorithm being used. As a side result, the final software can be interpreted as a flexible library of several parsing/disambiguation and pruning modules that can be used to build up new and more sophisticated entity annotators. We plan to release our library to the public as an open-source project.", "title": "" }, { "docid": "ab8599cbe4b906cea6afab663cbe2caf", "text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.", "title": "" }, { "docid": "f9d91253c5c276bb020daab4a4127822", "text": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. 
While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.", "title": "" }, { "docid": "fb97b11eba38f84f38b473a09119162a", "text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.", "title": "" }, { "docid": "3ef6a2d1c125d5c7edf60e3ceed23317", "text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10 and 10 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. 
POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.", "title": "" }, { "docid": "7ec5faf2081790e7baa1832d5f9ab5bd", "text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages while for some minority languages, such as the Uyghur language, text detection is paid less attention. In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set by various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.", "title": "" }, { "docid": "ee46ee9e45a87c111eb14397c99cd653", "text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto", "title": "" }, { "docid": "9665328d7993e2b1298a2c849c987979", "text": "The case study presented here, deals with the subject of second language acquisition making at the same time an effort to show as much as possible how L1 was acquired and the ways L1 affected L2, through the process of examining a Greek girl who has been exposed to the English language from the age of eight. Furthermore, I had the chance to analyze the method used by the frontistirio teachers and in what ways this method helps or negatively influences children regarding their performance in the four basic skills. We will evaluate the evidence acquired by the girl by studying briefly the basic theories provided by important figures in the field of L2. Finally, I will also include my personal suggestions and the improvement of the child’s abilities and I will state my opinion clearly.", "title": "" }, { "docid": "819693b9acce3dfbb74694733ab4d10f", "text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. 
Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.", "title": "" }, { "docid": "a42a19df66ab8827bfcf4c4ee709504d", "text": "We describe the numerical methods required in our approach to multi-dimensional scaling. The rationale of this approach has appeared previously. 1. Introduction We describe a numerical method for multidimensional scaling. In a companion paper [7] we describe the rationale for our approach to scaling, which is related to that of Shepard [9]. As the numerical methods required are largely unfamiliar to psychologists, and even have elements of novelty within the field of numerical analysis, it seems worthwhile to describe them. In [7] we suppose that there are n objects 1, · · · , n, and that we have experimental values 8;; of dissimilarity between them. For a configuration of points x1 , • • • , x .. in t:-dimensional space, with interpoint distances d;; , we defined the stress of the configuration by The stress is intendoo to be a measure of how well the configuration matches the data. More fully, it is supposed that the \"true\" dissimilarities result from some unknown monotone distortion of the interpoint distances of some \"true\" configuration, and that the observed dissimilarities differ from the true dissimilarities only because of random fluctuation. The stress is essentially the root-mean-square residual departure from this hypothesis. By definition, the best-fitting configuration in t-dimensional space, for a fixed value of t, is that configuration which minimizes the stress. The primary computational problem is to find that configuration. A secondary computational problem, of independent interest, is to find the values of", "title": "" }, { "docid": "7e8b58b88a1a139f9eb6642a69eb697a", "text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.", "title": "" } ]
scidocsrr
6788a1ff9e1df4f3a515adc32d05e2be
A REVIEW ON IMAGE SEGMENTATION TECHNIQUES WITH REMOTE SENSING PERSPECTIVE
[ { "docid": "3fa70c2667c6dbe179a7e17e44571727", "text": "Abstract--For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes: (1) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques. In the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.", "title": "" }, { "docid": "d984489b4b71eabe39ed79fac9cf27a1", "text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use with pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first object-oriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing", "title": "" } ]
[ { "docid": "451434f1181c021eb49442d6eb6617c5", "text": "In this paper, we use variational recurrent neural network to investigate the anomaly detection problem on graph time series. The temporal correlation is modeled by the combination of recurrent neural network (RNN) and variational inference (VI), while the spatial information is captured by the graph convolutional network. In order to incorporate external factors, we use feature extractor to augment the transition of latent variables, which can learn the influence of external factors. With the target function as accumulative ELBO, it is easy to extend this model to on-line method. The experimental study on traffic flow data shows the detection capability of the proposed method.", "title": "" }, { "docid": "9c67b538a5e6806273b26d9c5899ef42", "text": "Back propagation training algorithms have been implemented by many researchers for their own purposes and provided publicly on the internet for others to use in veriication of published results and for reuse in unrelated research projects. Often, the source code of a package is used as the basis for a new package for demonstrating new algorithm variations, or some functionality is added speciically for analysis of results. However, there are rarely any guarantees that the original implementation is faithful to the algorithm it represents, or that its code is bug free or accurate. This report attempts to look at a few implementations and provide a test suite which shows deeciencies in some software available which the average researcher may not be aware of, and may not have the time to discover on their own. This test suite may then be used to test the correctness of new packages.", "title": "" }, { "docid": "082894a8498a5c22af8903ad8ea6399a", "text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.", "title": "" }, { "docid": "80ece123483d6de02c4e621bdb8eb0fc", "text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. 
Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.", "title": "" }, { "docid": "73cee52ebbb10167f7d32a49d1243af6", "text": "We consider the problem of a robot learning the mechanical properties of objects through physical interaction with the object, and introduce a practical, data-efficient approach for identifying the motion models of these objects. The proposed method utilizes a physics engine, where the robot seeks to identify the inertial and friction parameters of the object by simulating its motion under different values of the parameters and identifying those that result in a simulation which matches the observed real motions. The problem is solved in a Bayesian optimization framework. The same framework is used for both identifying the model of an object online and searching for a policy that would minimize a given cost function according to the identified model. Experimental results both in simulation and using a real robot indicate that the proposed method outperforms state-of-the-art model-free reinforcement learning approaches.", "title": "" }, { "docid": "e64caf71b75ac93f0426b199844f319b", "text": "INTRODUCTION\nVaginismus is mostly unknown among clinicians and women. Vaginismus causes women to have fear, anxiety, and pain with penetration attempts.\n\n\nAIM\nTo present a large cohort of patients based on prior published studies approved by an institutional review board and the Food and Drug Administration using a comprehensive multimodal vaginismus treatment program to treat the physical and psychologic manifestations of women with vaginismus and to record successes, failures, and untoward effects of this treatment approach.\n\n\nMETHODS\nAssessment of vaginismus included a comprehensive pretreatment questionnaire, the Female Sexual Function Index (FSFI), and consultation. All patients signed a detailed informed consent. Treatment consisted of a multimodal approach including intravaginal injections of onabotulinumtoxinA (Botox) and bupivacaine, progressive dilation under conscious sedation, indwelling dilator, follow-up and support with office visits, phone calls, e-mails, dilation logs, and FSFI reports.\n\n\nMAIN OUTCOME MEASURES\nLogs noting dilation progression, pain and anxiety scores, time to achieve intercourse, setbacks, and untoward effects. Post-treatment FSFI scores were compared with preprocedure scores.\n\n\nRESULTS\nOne hundred seventy-one patients (71%) reported having pain-free intercourse at a mean of 5.1 weeks (median = 2.5). Six patients (2.5%) were unable to achieve intercourse within a 1-year period after treatment and 64 patients (26.6%) were lost to follow-up. The change in the overall FSFI score measured at baseline, 3 months, 6 months, and 1 year was statistically significant at the 0.05 level. Three patients developed mild temporary stress incontinence, two patients developed a short period of temporary blurred vision, and one patient developed temporary excessive vaginal dryness. All adverse events resolved by approximately 4 months. 
One patient required retreatment followed by successful coitus.\n\n\nCONCLUSION\nA multimodal program that treated the physical and psychologic aspects of vaginismus enabled women to achieve pain-free intercourse as noted by patient communications and serial female sexual function studies. Further studies are indicated to better understand the individual components of this multimodal treatment program. Pacik PT, Geletta S. Vaginismus Treatment: Clinical Trials Follow Up 241 Patients. Sex Med 2017;5:e114-e123.", "title": "" }, { "docid": "d9a99642b106ad3f63134916bd75329b", "text": "We extend Convolutional Neural Networks (CNNs) on flat and regular domains (e.g. 2D images) to curved 2D manifolds embedded in 3D Euclidean space that are discretized as irregular surface meshes and widely used to represent geometric data in Computer Vision and Graphics. We define surface convolution on tangent spaces of a surface domain, where the convolution has two desirable properties: 1) the distortion of surface domain signals is locally minimal when being projected to the tangent space, and 2) the translation equi-variance property holds locally, by aligning tangent spaces for neighboring points with the canonical torsion-free parallel transport that preserves tangent space metric. To implement such a convolution, we rely on a parallel N -direction frame field on the surface that minimizes the field variation and therefore is as compatible as possible to and approximates the parallel transport. On the tangent spaces equipped with parallel frames, the computation of surface convolution becomes standard routine. The tangential frames have N rotational symmetry that must be disambiguated, which we resolve by duplicating the surface domain to construct its covering space induced by the parallel frames and grouping the feature maps into N sets accordingly; each surface convolution is computed on the N branches of the cover space with their respective feature maps while the kernel weights are shared. To handle the irregular data points of a discretized surface mesh while being able to share trainable kernel weights, we make the convolution semi-discrete, i.e. the convolution kernels are smooth polynomial functions, and their convolution with discrete surface data points becomes discrete sampling and weighted summation. In addition, pooling and unpooling operations for surface CNNs on a mesh are computed along the mesh hierarchy built through simplification. The presented surface-based CNNs allow us to do effective deep learning on surface meshes using network structures very similar to those for flat and regular domains. In particular, we show that for various tasks, including classification, segmentation and non-rigid registration, surface CNNs using only raw input signals achieve superior performances than other neural network models using sophisticated pre-computed input features, and enable a simple non-rigid human-body registration procedure by regressing to restpose positions directly.", "title": "" }, { "docid": "6b3c8e869651690193e66bc2524c1f56", "text": "Convolutional Neural Networks (CNNs) have been widely used for face recognition and got extraordinary performance with large number of available face images of different people. However, it is hard to get uniform distributed data for all people. In most face datasets, a large proportion of people have few face images. Only a small number of people appear frequently with more face images. 
These people with more face images have higher impact on the feature learning than others. The imbalanced distribution leads to the difficulty to train a CNN model for feature representation that is general for each person, instead of mainly for the people with large number of face images. To address this challenge, we proposed a center invariant loss which aligns the center of each person to enforce the learned features to have a general representation for all people. The center invariant loss penalizes the difference between each center of classes. With center invariant loss, we can train a robust CNN that treats each class equally regardless the number of class samples. Extensive experiments demonstrate the effectiveness of the proposed approach. We achieve state-of-the-art results on LFW and YTF datasets.", "title": "" }, { "docid": "266625d5f1c658849d34514d5dc9586f", "text": "Hand written digit recognition is highly nonlinear problem. Recognition of handwritten numerals plays an active role in day to day life now days. Office automation, e-governors and many other areas, reading printed or handwritten documents and convert them to digital media is very crucial and time consuming task. So the system should be designed in such a way that it should be capable of reading handwritten numerals and provide appropriate response as humans do. However, handwritten digits are varying from person to person because each one has their own style of writing, means the same digit or character/word written by different writer will be different even in different languages. This paper presents survey on handwritten digit recognition systems with recent techniques, with three well known classifiers namely MLP, SVM and k-NN used for classification. This paper presents comparative analysis that describes recent methods and helps to find future scope.", "title": "" }, { "docid": "94189593d39be7c5e5411482c7b996e3", "text": "In this paper, interval-valued fuzzy planar graphs are defined and several properties are studied. The interval-valued fuzzy graphs are more efficient than fuzzy graphs, since the degree of membership of vertices and edges lie within the interval [0, 1] instead at a point in fuzzy graphs. We also use the term ‘degree of planarity’ to measures the nature of planarity of an interval-valued fuzzy graph. The other relevant terms such as strong edges, interval-valued fuzzy faces, strong interval-valued fuzzy faces are defined here. The interval-valued fuzzy dual graph which is closely associated to the interval-valued fuzzy planar graph is defined. Several properties of interval-valued fuzzy dual graph are also studied. An example of interval-valued fuzzy planar graph is given.", "title": "" }, { "docid": "5441c49359d4446a51cea3f13991a7dc", "text": "Nowadays, smart composite materials embed miniaturized sensors for structural health monitoring (SHM) in order to mitigate the risk of failure due to an overload or to unwanted inhomogeneity resulting from the fabrication process. Optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, outperform traditional sensor technologies, as they are lightweight, small in size and offer convenient multiplexing capabilities with remote operation. They have thus been extensively associated to composite materials to study their behavior for further SHM purposes. This paper reviews the main challenges arising from the use of FBGs in composite materials. 
The focus will be made on issues related to temperature-strain discrimination, demodulation of the amplitude spectrum during and after the curing process as well as connection between the embedded optical fibers and the surroundings. The main strategies developed in each of these three topics will be summarized and compared, demonstrating the large progress that has been made in this field in the past few years.", "title": "" }, { "docid": "a354b6c03cadf539ccd01a247447ebc1", "text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coil, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).", "title": "" }, { "docid": "18a317b8470b4006ccea0e436f54cfcd", "text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. 
In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.", "title": "" }, { "docid": "c839542db0e80ce253a170a386d91bab", "text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).", "title": "" }, { "docid": "f1ebd840092228e48a3ab996287e7afd", "text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.", "title": "" }, { "docid": "81c59b4a7a59a262f9c270b76ef0f747", "text": "Single-phase power factor correction (PFC) ac-dc converters are widely used in the industry for ac-dc power conversion from single phase ac-mains to an isolated output dc voltage. Typically, for high-power applications, such converters use an ac-dc boost input converter followed by a dc-dc full-bridge converter. 
A new ac-dc single-stage high-power universal PFC ac input full-bridge, pulse-width modulated converter is proposed in this paper. The converter can operate with an excellent input power factor, continuous input and output currents, and a non-excessive intermediate dc bus voltage and has reduced number of semiconductor devices thus presenting a cost-effective novel solution for such applications. In this paper, the operation of the proposed converter is explained, a steady-state analysis of its operation is performed, and the results of the analysis are used to develop a procedure for its design. The operation of the proposed converter is confirmed with results obtained from an experimental prototype.", "title": "" }, { "docid": "ec681bc427c66adfad79008840ea9b60", "text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.", "title": "" }, { "docid": "293e2cd2647740bb65849fed003eb4ac", "text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.", "title": "" }, { "docid": "dd60c1f0ae3707cbeb24da1137ee327d", "text": "Silicone oils have wide range of applications in personal care products due to their unique properties of high lubricity, non-toxicity, excessive spreading and film formation. They are usually employed in the form of emulsions due to their inert nature. Until now, different conventional emulsification techniques have been developed and applied to prepare silicone oil emulsions. The size and uniformity of emulsions showed important influence on stability of droplets, which further affect the application performance. Therefore, various strategies were developed to improve the stability as well as application performance of silicone oil emulsions. In this review, we highlight different factors influencing the stability of silicone oil emulsions and explain various strategies to overcome the stability problems. 
In addition, the silicone deposition on the surface of hair substrates and different approaches to increase their deposition are also discussed in detail.", "title": "" }, { "docid": "ff41327bad272a6d80d4daba25b6472f", "text": "Dense very deep submicron (VDSM) systems on chip (SoC) face a serious limitation in performance due to reverse scaling of global interconnects. Interconnection techniques that decrease delay and delay variation and ensure signal integrity play an important role in the growth of the semiconductor industry into future generations. Current-mode low-swing interconnection techniques provide an attractive alternative to conventional full-swing voltage mode signaling in terms of delay, power and noise immunity. In this paper, we present a new current-mode low-swing interconnection technique which reduces the delay and delay variations in global interconnects. Extensive simulations of the performance of our circuit under crosstalk, supply voltage, process and temperature variations were performed. The results indicate significant savings in power, reduction in delay and increase in noise immunity compared to other techniques.", "title": "" } ]
scidocsrr
44e8caf0bf93aa5b054500a852704660
Urdu text classification
[ { "docid": "17ec8f66fc6822520e2f22bd035c3ba0", "text": "The paper discusses various phases in Urdu lexicon development from corpus. First, issues related to Urdu orthography, such as optional vocalic content, Unicode variations, name recognition, spelling variation, etc., are described; then corpus acquisition, corpus cleaning, tokenization, etc. are discussed; and finally Urdu lexicon development, i.e. POS tags, features, lemmas, phonemic transcription, and the format of the lexicon, is discussed.", "title": "" }, { "docid": "61662cfd286c06970243bc13d5eff566", "text": "This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization performance of an SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully?", "title": "" } ]
[ { "docid": "396dd0517369d892d249bb64fa410128", "text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148.  Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.", "title": "" }, { "docid": "dcd21065898c9dd108617a3db8dad6a1", "text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. 
The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.", "title": "" }, { "docid": "c366303728d2a8ee47fe4cbfe67dec24", "text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.", "title": "" }, { "docid": "d1e2948af948822746fcc03bc79d6d2a", "text": "The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.", "title": "" }, { "docid": "242a2f64fc103af641320c1efe338412", "text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. 
Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.", "title": "" }, { "docid": "e86247471d4911cb84aa79911547045b", "text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.", "title": "" }, { "docid": "4e2bfd87acf1287f36694634a6111b3f", "text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.", "title": "" }, { "docid": "6f31beb59f3f410f5d44446a4b75247a", "text": "An approach for estimating direction-of-arrival (DoA) based on power output cross-correlation and antenna pattern diversity is proposed for a reactively steerable antenna. An \"estimator condition\" is proposed, from which the most appropriate pattern shape is derived. Computer simulations with directive beam patterns obtained from an electronically steerable parasitic array radiator antenna model are conducted to illustrate the theory and to inspect the method performance with respect to the \"estimator condition\". 
The simulation results confirm that a good estimation can be expected when suitable directive patterns are chosen. In addition, to verify performance, experiments on estimating DoA are conducted in an anechoic chamber for several angles of arrival and different scenarios of antenna adjustable reactance values. The results show that the proposed method can provide high-precision DoA estimation.", "title": "" }, { "docid": "57602f5e2f64514926ab96551f2b4fb6", "text": "Landscape genetics has seen rapid growth in number of publications since the term was coined in 2003. An extensive literature search from 1998 to 2008 using keywords associated with landscape genetics yielded 655 articles encompassing a vast array of study organisms, study designs and methodology. These publications were screened to identify 174 studies that explicitly incorporated at least one landscape variable with genetic data. We systematically reviewed this set of papers to assess taxonomic and temporal trends in: (i) geographic regions studied; (ii) types of questions addressed; (iii) molecular markers used; (iv) statistical analyses used; and (v) types and nature of spatial data used. Overall, studies have occurred in geographic regions proximal to developed countries and more commonly in terrestrial vs. aquatic habitats. Questions most often focused on effects of barriers and/or landscape variables on gene flow. The most commonly used molecular markers were microsatellites and amplified fragment length polymorphism (AFLPs), with AFLPs used more frequently in plants than animals. Analysis methods were dominated by Mantel and assignment tests. We also assessed differences among journals to evaluate the uniformity of reporting and publication standards. Few studies presented an explicit study design or explicit descriptions of spatial extent. While some landscape variables such as topographic relief affected most species studied, effects were not universal, and some species appeared unaffected by the landscape. Effects of habitat fragmentation were mixed, with some species altering movement paths and others unaffected. Taken together, although some generalities emerged regarding effects of specific landscape variables, results varied, thereby reinforcing the need for species-specific work. We conclude by: highlighting gaps in knowledge and methodology, providing guidelines to authors and reviewers of landscape genetics studies, and suggesting promising future directions of inquiry.", "title": "" }, { "docid": "2c19e34ba53e7eb8631d979c83ee3e55", "text": "This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.", "title": "" }, { "docid": "39188ae46f22dd183f356ba78528b720", "text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. 
We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.", "title": "" }, { "docid": "3a0d2784b1115e82a4aedad074da8c74", "text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "154ab0cbc1dfa3c4bae8a846f800699e", "text": "This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using sliding mode technique for a general uncertain system assuming asymptotic stability. Then the convergence characteristics of the estimation error are analyzed by Lyapunov strategy. It revealed that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to estimating the total disturbance. Then, an ADRC is implemented by using a nonlinear state error feedback (NLSEF) controller; that is suggested by J. Han and the proposed SMESO to control and actively reject the total disturbance of a permanent magnet DC (PMDC) motor. These disturbances caused by the unknown exogenous disturbances and the matched uncertainties of the controlled model. 
The proposed SMESO is compared with the linear extended state observer (LESO). Through digital simulations using MATLAB / SIMULINK, the chattering phenomenon has been reduced dramatically on the control input channel compared to LESO. Finally, the closed-loop system exhibits a high immunity to torque disturbance and quite robustness to matched uncertainties in the system. Keywords—extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback", "title": "" }, { "docid": "055faaaa14959a204ca19a4962f6e822", "text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaningThe removal of noise and inconsistent data. 2. Data integration The combination of multiple sources of data. 3. Data selection The data relevant for analysis is retrieved from the database. 4. Data transformation The consolidation and transformation of data into forms appropriate for mining. 5. Data mining The use of intelligent methods to extract patterns from data. 6. Pattern evaluation Identification of patterns that are interesting. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 36 7. Knowledge presentation Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). 
This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform III. A comprehensive collection of data preprocessing and modeling techniques IV. Ease of use due to its graphical user interfaces Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 37 processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. 
DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as census (2001), socio-economic data, and few basic information of Latur district are collected from National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document descibes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig.1 Processed ARFF file in WEKA. In the above shown file, there are 729 villages data is processed with different attributes (25) like population, health, literacy, village locations etc. Among all these, few of them are preprocessed attributes generated by census data like, percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 38 The processed data in Weka can be analyzed using different data mining techniques like, Classification, Clustering, Association rule mining, Visualization etc. algorithms. The Figure 2 shows the few processed attributes which are visualized into a 2 dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to two or more associative relation of data set. In this process, we have made an attempt to visualize the impact of male and female literacy on the gender inequality. The literacy related and population data is processed and computed the percent wise male and female literacy. Accordingly we have computed the sex ratio attribute from the given male and female population data. The new attributes like, male_percent_literacy, female_percent_literacy and sex_ratio are compared each other to extract the impact of literacy on gender inequality. The Figure 3 and Figure 4 are the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 39 Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. By considering both the results, the female percent literacy is poor than the male percent literacy in the district. The sex ratio values are higher in male percent literacy than the female percent literacy. The results are purely showing that the literacy is very much important to manage the gender inequality of any region. 
ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and WEKA for providing such a strong tool to extract and analyze knowledge from database. CONCLUSION Knowledge extraction from database is becom", "title": "" }, { "docid": "2757d2ab9c3fbc2eb01385771f297a71", "text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting of a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory. Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.", "title": "" }, { "docid": "bfcd962b099e6e751125ac43646d76cc", "text": "Dear Editor: We read carefully and with great interest the anatomic study performed by Lilyquist et al. They performed an interesting study of the tibiofibular syndesmosis using a 3-dimensional method that can be of help when performing anatomic studies. As the authors report in the study, a controversy exists regarding the anatomic structures of the syndesmosis, and a huge confusion can be observed when reading the related literature. However, anatomic confusion between the inferior transverse ligament and the intermalleolar ligament is present in the manuscript: the intermalleolar ligament is erroneously identified as the “inferior” transverse ligament. The transverse ligament is the name that receives the deep component of the posterior tibiofibular ligament. The posterior tibiofibular ligament is a ligament located in the posterior aspect of the ankle that joins the distal epiphysis of tibia and fibula; it is formed by 2 fascicles, one superficial and one deep. The deep fascicle or transverse ligament is difficult to see from a posterior ankle view, but easily from a plantar view of the tibiofibular syndesmosis (Figure 1). Instead, the intermalleolar ligament is a thickening of the posterior ankle joint capsule, located between the posterior talofibular ligament and the transverse ligament. It originates from the medial facet of the lateral malleolus and directs medially to tibia and talus (Figure 2). The intermalleolar ligament was observed in 100% of the specimens by Golanó et al in contrast with 70% in Lilyquist’s study. On the other hand, structures of the ankle syndesmosis have not been named according to the International Anatomical Terminology (IAT). In 1955, the VI Federative International Congress of Anatomy accorded to eliminate eponyms from the IAT. Because of this measure, the Chaput, Wagstaff, or Volkman tubercles used in the manuscript should be eliminated in order to avoid increasing confusion. 
Lilyquist et al also defined the tibiofibular syndesmosis as being formed by the anterior inferior tibiofibular ligament, the posterior inferior tibiofibular ligament, the interosseous ligament, and the inferior transverse ligament. The anterior inferior tibiofibular ligament and posterior inferior tibiofibular ligament of the tibiofibular syndesmosis (or inferior tibiofibular joint) should be referred to as the anterior tibiofibular ligament and posterior tibiofibular ligament. The reason why it is not necessary to use “inferior” in its description is that the ligaments of the superior tibiofibular joint are the anterior ligament of the fibular head and the posterior ligament of the fibular head, not the “anterior superior tibiofibular ligament” and “posterior superior tibiofibular ligament.” The ankle syndesmosis is one of the areas of the human body where chronic anatomic errors exist: the transverse ligament (deep component of the posterior tibiofibular ligament), the anterior tibiofibular ligament (“anterior 689614 FAIXXX10.1177/1071100716689614Foot & Ankle InternationalLetter to the Editor letter2017", "title": "" }, { "docid": "3564cf609cf1b9666eaff7edcd12a540", "text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.", "title": "" }, { "docid": "06f27036cd261647c7670bdf854f5fb4", "text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. 
However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.", "title": "" } ]
scidocsrr
c99c84bf59c33895d74c2c5fa30f9650
Why and how Java developers break APIs
[ { "docid": "fd22861fbb2661a135f9a421d621ba35", "text": "When APIs evolve, clients make corresponding changes to their applications to utilize new or updated APIs. Despite the benefits of new or updated APIs, developers are often slow to adopt the new APIs. As a first step toward understanding the impact of API evolution on software ecosystems, we conduct an in-depth case study of the co-evolution behavior of Android API and dependent applications using the version history data found in github. Our study confirms that Android is evolving fast at a rate of 115 API updates per month on average. Client adoption, however, is not catching up with the pace of API evolution. About 28% of API references in client applications are outdated with a median lagging time of 16 months. 22% of outdated API usages eventually upgrade to use newer API versions, but the propagation time is about 14 months, much slower than the average API release interval (3 months). Fast evolving APIs are used more by clients than slow evolving APIs but the average time taken to adopt new versions is longer for fast evolving APIs. Further, API usage adaptation code is more defect prone than the one without API usage adaptation. This may indicate that developers avoid API instability.", "title": "" } ]
[ { "docid": "08565176f7a68c27f20756e468663b47", "text": "Speech processing is emerged as one of the important application area of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding etc. The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step for speaker recognition. Many algorithms are suggested/developed by the researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text dependent speaker identification system. Some modifications to the existing technique of MFCC for feature extraction are also suggested to improve the speaker recognition efficiency.", "title": "" }, { "docid": "e3027bdccdb21de2cc395af603675702", "text": "Extraction of the lower third molar is one of the most common procedures performed in oral surgery. In general, impacted tooth extraction involves sectioning the tooth’s crown and roots. In order to divide the impacted tooth so that it can be extracted, high-speed air turbine drills are frequently used. However, complications related to air turbine drills may occur. In this report, we propose an alternative tooth sectioning method that obviates the need for air turbine drill use by using a low-speed straight handpiece and carbide bur. A 21-year-old female patient presented to the institute’s dental hospital complaining of symptoms localized to the left lower third molar tooth that were suggestive of impaction. After physical examination, tooth extraction of the impacted left lower third molar was proposed and the patient consented to the procedure. The crown was divided using a conventional straight low-speed handpiece and carbide bur. This carbide bur can easily cut through the enamel of crown. On post-operative day number five, suture was removed and the wound was extremely clear. This technique could minimise intra-operative time and reduce the morbidity associated with air turbine drill assisted lower third molar extraction.", "title": "" }, { "docid": "2794ea63eb1a24ebd1cea052345569eb", "text": "Ethernet is considered as a future communication standard for distributed embedded systems in the automotive and industrial domains. A key challenge is the deterministic low-latency transport of Ethernet frames, as many safety-critical real-time applications in these domains have tight timing requirements. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which (among other things) address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. In this paper, we consider TSN's time-aware and peristaltic shapers and evaluate whether these shapers are able to fulfill these strict timing requirements. We present a formal timing analysis, which is a key requirement for the adoption of Ethernet in safety-critical real-time systems, to derive worst-case latency bounds for each shaper. We use a realistic automotive Ethernet setup to compare these shapers to each other and against Ethernet following IEEE 802.1Q.", "title": "" }, { "docid": "d7e8a55c9d1ad24a82ea25a27ac08076", "text": "We present online learning techniques for statistical machine translation (SMT). 
The availability of large training data sets that grow constantly over time is becoming more and more frequent in the field of SMT—for example, in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques require very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training samples can be processed individually in real time. For this purpose, we define a state-of-the-art SMT model composed of a set of submodels, as well as a set of incremental update rules for each of these submodels. To test our techniques, we have studied two well-known SMT applications that can be used in translation agencies: post-editing and interactive machine translation. In both scenarios, the SMT system collaborates with the user to generate high-quality translations. These user-validated translations can be used to extend the SMT models by means of online learning. Empirical results in the two scenarios under consideration show the great impact of frequent updates in the system performance. The time cost of such updates was also measured, comparing the efficiency of a batch learning SMT system with that of an online learning system, showing that online learning is able to work in real time whereas the time cost of batch retraining soon becomes infeasible. Empirical results also showed that the performance of online learning is comparable to that of batch learning. Moreover, the proposed techniques were able to learn from previously estimated models or from scratch. We also propose two new measures to predict the effectiveness of online learning in SMT tasks. The translation system with online learning capabilities presented here is implemented in the open-source Thot toolkit for SMT.", "title": "" }, { "docid": "4b78f107ee628cefaeb80296e4f9ae27", "text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. 
When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in olivier:08}, the program achieve 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.", "title": "" }, { "docid": "1c5cba8f3533880b19e9ef98a296ef57", "text": "Internal organs are hidden and untouchable, making it difficult for children to learn their size, position, and function. Traditionally, human anatomy (body form) and physiology (body function) are taught using techniques ranging from worksheets to three-dimensional models. We present a new approach called BodyVis, an e-textile shirt that combines biometric sensing and wearable visualizations to reveal otherwise invisible body parts and functions. We describe our 15-month iterative design process including lessons learned through the development of three prototypes using participatory design and two evaluations of the final prototype: a design probe interview with seven elementary school teachers and three single-session deployments in after-school programs. Our findings have implications for the growing area of wearables and tangibles for learning.", "title": "" }, { "docid": "56110c3d5b88b118ad98bfd077f00221", "text": "Advances in mobile robotics have enabled robots that can autonomously operate in human-populated environments. Although primary tasks for such robots might be fetching, delivery, or escorting, they present an untapped potential as information gathering agents that can answer questions for the community of co-inhabitants. In this paper, we seek to better understand requirements for such information gathering robots (InfoBots) from the perspective of the user requesting the information. We present findings from two studies: (i) a user survey conducted in two office buildings and (ii) a 4-day long deployment in one of the buildings, during which inhabitants of the building could ask questions to an InfoBot through a web-based interface. 
These studies allow us to characterize the types of information that InfoBots can provide for their users.", "title": "" }, { "docid": "de5c439731485929416b0e729f7f79b2", "text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1", "title": "" }, { "docid": "8bb465b2ec1f751b235992a79c6f7bf1", "text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.", "title": "" }, { "docid": "5f17432d235a991a5544ad794875a919", "text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. 
We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.", "title": "" }, { "docid": "426826d9ede3c0146840e4ec9190e680", "text": "We propose methods to classify lines of military chat, or posts, which contain items of interest. We evaluated several current text categorization and feature selection methodologies on chat posts. Our chat posts are examples of 'micro-text', or text that is generally very short in length, semi-structured, and characterized by unstructured or informal grammar and language. Although this study focused specifically on tactical updates via chat, we believe the findings are applicable to content of a similar linguistic structure. Completion of this milestone is a significant first step in allowing for more complex categorization and information extraction.", "title": "" }, { "docid": "b27b164a7ff43b8f360167e5f886f18a", "text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.", "title": "" }, { "docid": "3bb48e5bf7cc87d635ab4958553ef153", "text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. 
Phone: +46732305934 Mail: malin.sundstrom@hb.se", "title": "" }, { "docid": "764e5c5201217be1aa9e24ce4fa3760a", "text": "Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Please do not copy or distribute without explicit permission of the authors. Abstract Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aims at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends on not only a customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, as well as the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.", "title": "" }, { "docid": "1164e5b54ce970b55cf65cca0a1fbcb1", "text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. 
We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.", "title": "" }, { "docid": "4822a22e8fde11bf99eb67f96a8d2443", "text": "The traditional approach towards human identification such as fingerprints, identity cards, iris recognition etc. lead to the improvised technique for face recognition. This includes enhancement and segmentation of face image, detection of face boundary and facial features, matching of extracted features against the features in a database, and finally recognition of the face. This research proposes a wavelet transformation for preprocessing the face image, extracting edge image, extracting features and finally matching extracted facial features for face recognition. Simulation is done using ORL database that contains PGM images. This research finds application in homeland security where it can increase the robustness of the existing face recognition algorithms.", "title": "" }, { "docid": "171ded161c7d61cfaf4663ba080b0c6a", "text": "Digital advertisements are delivered in the form of static images, animations or videos, with the goal to promote a product, a service or an idea to desktop or mobile users. Thus, the advertiser pays a monetary cost to buy ad-space in a content provider’s medium (e.g., website) to place their advertisement in the consumer’s display. However, is it only the advertiser who pays for the ad delivery? Unlike traditional advertisements in mediums such as newspapers, TV or radio, in the digital world, the end-users are also paying a cost for the advertisement delivery. Whilst the cost on the advertiser’s side is clearly monetary, on the end-user, it includes both quantifiable costs, such as network requests and transferred bytes, and qualitative costs such as privacy loss to the ad ecosystem. In this study, we aim to increase user awareness regarding the hidden costs of digital advertisement in mobile devices, and compare the user and advertiser views. Specifically, we built OpenDAMP, a transparency tool that passively analyzes users’ web traffic and estimates the costs in both sides. We use a year-long dataset of 1270 real mobile users and by juxtaposing the costs of both sides, we identify a clear imbalance: the advertisers pay several times less to deliver ads, than the cost paid by the users to download them. In addition, the majority of users experience a significant privacy loss, through the personalized ad delivery mechanics.", "title": "" }, { "docid": "3ff58e78ac9fe623e53743ad05248a30", "text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. 
In general, power savings of about 30% and a 36% reduction in toggle rate can be seen with different complex clock-gating methods with respect to no clock-gating in the design.", "title": "" }, { "docid": "e57131739db1ed904cb0032dddd67804", "text": "We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.", "title": "" } ]
scidocsrr
adfbcfeacce9b78d0ea346b8d9b3fb52
Map-supervised road detection
[ { "docid": "add9821c4680fab8ad8dfacd8ca4236e", "text": "In this paper, we propose to fuse the LIDAR and monocular image in the framework of conditional random field to detect the road robustly in challenging scenarios. LIDAR points are aligned with pixels in image by cross calibration. Then boosted decision tree based classifiers are trained for image and point cloud respectively. The scores of the two kinds of classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments tested on KITTI-Road benchmark show that our method reaches the state-of-the-art.", "title": "" }, { "docid": "fa88e0d0610f60522fc1140b39fc2972", "text": "The majority of current image-based road following algorithms operate, at least in part, by assuming the presence of structural or visual cues unique to the roadway. As a result, these algorithms are poorly suited to the task of tracking unstructured roads typical in desert environments. In this paper, we propose a road following algorithm that operates in a selfsupervised learning regime, allowing it to adapt to changing road conditions while making no assumptions about the general structure or appearance of the road surface. An application of optical flow techniques, paired with one-dimensional template matching, allows identification of regions in the current camera image that closely resemble the learned appearance of the road in the recent past. The algorithm assumes the vehicle lies on the road in order to form templates of the road’s appearance. A dynamic programming variant is then applied to optimize the 1-D template match results while enforcing a constraint on the maximum road curvature expected. Algorithm output images, as well as quantitative results, are presented for three distinct road types encountered in actual driving video acquired in the California Mojave Desert.", "title": "" } ]
[ { "docid": "745bbe075634f40e6c66716a6b877619", "text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.", "title": "" }, { "docid": "4ba95fbd89f88bdd6277eff955681d65", "text": "In this paper, a new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for the future fifth generation (5G) short-range wireless communications applications is presented. This array antenna is proposed and designed with a standard printed circuit board (PCB) process to be suitable for integration with radio-frequency/microwave circuitry. The proposed structure employs four circular shaped DD patch radiator antenna elements fed by a 1-to-4 Wilkinson power divider surrounded by an electromagnetic bandgap (EBG) structure. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The calculated impedance bandwidth of the proposed array antenna ranges from 27.1 GHz to 29.5 GHz for a reflection coefficient (S11) less than -10 dB. The proposed design exhibits good stable radiation patterns over the whole frequency band of interest with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications.", "title": "" }, { "docid": "2e93d2ba94e0c468634bf99be76706bb", "text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.", "title": "" }, { "docid": "853375477bf531499067eedfe64e6e2d", "text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. 
In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.", "title": "" }, { "docid": "8bc221213edc863f8cba6f9f5d9a9be0", "text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.", "title": "" }, { "docid": "3a5d43d86d39966aca2d93d1cf66b13d", "text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.", "title": "" }, { "docid": "33b37422ace8a300d53d4896de6bbb6f", "text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. 
In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.", "title": "" }, { "docid": "cb0021ec58487e3dabc445f75918c974", "text": "This document includes supplementary material for the semi-supervised approach towards framesemantic parsing for unknown predicates (Das and Smith, 2011). We include the names of the test documents used in the study, plot the results for framesemantic parsing while varying the hyperparameter that is used to determine the number of top frames to be selected from the posterior distribution over each target of a constructed graph and argue why the semi-supervised self-training baseline did not perform well on the task.", "title": "" }, { "docid": "90bf5834a6e78ed946a6c898f1c1905e", "text": "Many grid connected power electronic systems, such as STATCOMs, UPFCs, and distributed generation system interfaces, use a voltage source inverter (VSI) connected to the supply network through a filter. This filter, typically a series inductance, acts to reduce the switching harmonics entering the distribution network. An alternative filter is a LCL network, which can achieve reduced levels of harmonic distortion at lower switching frequencies and with less inductance, and therefore has potential benefits for higher power applications. However, systems incorporating LCL filters require more complex control strategies and are not commonly presented in literature. This paper proposes a robust strategy for regulating the grid current entering a distribution network from a three-phase VSI system connected via a LCL filter. The strategy integrates an outer loop grid current regulator with inner capacitor current regulation to stabilize the system. A synchronous frame PI current regulation strategy is used for the outer grid current control loop. Linear analysis, simulation, and experimental results are used to verify the stability of the control algorithm across a range of operating conditions. Finally, expressions for “harmonic impedance” of the system are derived to study the effects of supply voltage distortion on the harmonic performance of the system.", "title": "" }, { "docid": "c1c9f0a61b8ec92d4904fa0fd84a4073", "text": "This work presents a Brain-Computer Interface (BCI) based on the Steady-State Visual Evoked Potential (SSVEP) that can discriminate four classes once per second. A statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency. 
Designed according to such an approach, volunteers were able to operate a BCI online with hit rates varying from 60% to 100%. Moreover, one of the volunteers could guide a robotic wheelchair through an indoor environment using such a BCI. As an additional feature, such a BCI incorporates visual feedback, which is essential for improving the performance of the whole system. All of these aspects allow this BCI to be used to command a robotic wheelchair efficiently.", "title": "" }, { "docid": "909d9d1b9054586afc4b303e94acae73", "text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-of-the-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.", "title": "" }, { "docid": "fd29a4adc5eba8025da48eb174bc0817", "text": "Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. 
These results offer an evidence-based roadmap for achieving the most accurate face identification possible.", "title": "" }, { "docid": "0a5ae1eb45404d6a42678e955c23116c", "text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.", "title": "" }, { "docid": "c1cdc9bb29660e910ccead445bcc896d", "text": "This paper describes an efficient technique for computing a hierarchical representation of the objects contained in a complex 3D scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MST. Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3D scenes are presented.", "title": "" }, { "docid": "6a6bd93714e6e77a7b9834e8efee943a", "text": "Many information systems involve data about people. In order to reliably associate data with particular individuals, it is necessary that an effective and efficient identification scheme be established and maintained. There is remarkably little in the information technology literature concerning human identification. This paper seeks to overcome that deficiency, by undertaking a survey of human identity and human identification. The techniques discussed include names, codes, knowledge-based and token-based id, and biometrics. The key challenge to management is identified as being to devise a scheme which is practicable and economic, and of sufficiently high integrity to address the risks the organisation confronts in its dealings with people. It is proposed that much greater use be made of schemes which are designed to afford people anonymity, or enable them to use multiple identities or pseudonyms, while at the same time protecting the organisation's own interests. Multi-purpose and inhabitant registration schemes are described, and the recurrence of proposals to implement and extend them is noted. Public policy issues are identified. Of especial concern is the threat to personal privacy that the general-purpose use of an inhabitant registrant scheme represents. 
It is speculated that, where such schemes are pursued energetically, the reaction may be strong enough to threaten the social fabric.", "title": "" }, { "docid": "cbc6bd586889561cc38696f758ad97d2", "text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.", "title": "" }, { "docid": "0f5511aaed3d6627671a5e9f68df422a", "text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.", "title": "" }, { "docid": "5dcbebce421097f887f43669e1294b6f", "text": "The paper syncretizes the fundamental concept of the Sea Computing model in Internet of Things and the routing protocol of the wireless sensor network, and proposes a new routing protocol CASCR (Context-Awareness in Sea Computing Routing Protocol) for Internet of Things, based on context-awareness which belongs to the key technologies of Internet of Things. Furthermore, the paper describes the details on the protocol in the work flow, data structure and quantitative algorithm and so on. Finally, the simulation is given to analyze the work performance of the protocol CASCR. Theoretical analysis and experiment verify that CASCR has higher energy efficient and longer lifetime than the congeneric protocols. The paper enriches the theoretical foundation and makes some contribution for wireless sensor network transiting to Internet of Things in this research phase.", "title": "" }, { "docid": "c581d1300bf07663fcfd8c704450db09", "text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. 
Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5fc8afbe7d55af3274d849d1576d3b13", "text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.", "title": "" } ]
scidocsrr
9d9ba5dbd1001814e255be0b16c9393c
Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic Representation Learning
[ { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "250d19b0185d69ec74b8f87e112b9570", "text": "In this paper, we investigate the application of recurrent neural network language models (RNNLM) and factored language models (FLM) to the task of language modeling for Code-Switching speech. We present a way to integrate partof-speech tags (POS) and language information (LID) into these models which leads to significant improvements in terms of perplexity. Furthermore, a comparison between RNNLMs and FLMs and a detailed analysis of perplexities on the different backoff levels are performed. Finally, we show that recurrent neural networks and factored language models can be combined using linear interpolation to achieve the best performance. The final combined language model provides 37.8% relative improvement in terms of perplexity on the SEAME development set and a relative improvement of 32.7% on the evaluation set compared to the traditional n-gram language model.", "title": "" } ]
[ { "docid": "b56d144f1cda6378367ea21e9c76a39e", "text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (gave about 85%-90% goodness of fit) outperforming the other classifiers.", "title": "" }, { "docid": "ab7b09f6779017479b12b20035ad2532", "text": "This article presents a 4:1 wide-band balun that won the student design competition for wide-band baluns held during the 2016 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2016) in San Francisco, California. For this contest, sponsored by Technical Committee MTT-17, participants were required to implement and evaluate their own baluns, with the winning entry achieving the widest bandwidth while satisfying the conditions of the competition rules during measurements at IMS2016. Some of the conditions were revised for this year's competition compared with previous competitions as follows.", "title": "" }, { "docid": "cf264a124cc9f68cf64cacb436b64fa3", "text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.", "title": "" }, { "docid": "3c9b28e47b492e329043941f4ff088b7", "text": "The importance of motion in attracting attention is well known. While watching videos, where motion is prevalent, how do we quantify the regions that are motion salient? In this paper, we investigate the role of motion in attention and compare it with the influence of other low-level features like image orientation and intensity. We propose a framework for motion saliency. In particular, we integrate motion vector information with spatial and temporal coherency to generate a motion attention map. The results show that our model achieves good performance in identifying regions that are moving and salient. We also find motion to have greater influence on saliency than other low-level features when watching videos.", "title": "" }, { "docid": "672fa729e41d20bdd396f9de4ead36b3", "text": "Data that encompasses relationships is represented by a graph of interconnected nodes. 
Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.", "title": "" }, { "docid": "3ae9da3a27b00fb60f9e8771de7355fe", "text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.", "title": "" }, { "docid": "ace2fa767a14ee32f596256ebdf9554f", "text": "Computing systems have steadily evolved into more complex, interconnected, heterogeneous entities. Ad-hoc techniques are most often used in designing them. Furthermore, researchers and designers from both academia and industry have focused on vertical approaches to emphasizing the advantages of one specific feature such as fault tolerance, security or performance. Such approaches led to very specialized computing systems and applications. Autonomic systems, as an alternative approach, can control and manage themselves automatically with minimal intervention by users or system administrators. This paper presents an autonomic framework in developing and implementing autonomic computing services and applications. Firstly, it shows how to apply this framework to autonomically manage the security of networks. 
Then an approach is presented to develop autonomic components from existing legacy components such as software modules/applications or hardware resources (router, processor, server, etc.). Experimental evaluation of the prototype shows that the system can be programmed dynamically to enable the components to operate autonomously.", "title": "" }, { "docid": "18b173283a1eb58170982504bec7484f", "text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.", "title": "" }, { "docid": "97446299cdba049d32fa9c7333de96c5", "text": "Wetlands all over the world have been lost or are threatened in spite of various international agreements and national policies. This is caused by: (1) the public nature of many wetlands products and services; (2) user externalities imposed on other stakeholders; and (3) policy intervention failures that are due to a lack of consistency among government policies in different areas (economics, environment, nature protection, physical planning, etc.). All three causes are related to information failures which in turn can be linked to the complexity and ‘invisibility’ of spatial relationships among groundwater, surface water and wetland vegetation. Integrated wetland research combining social and natural sciences can help in part to solve the information failure to achieve the required consistency across various government policies. An integrated wetland research framework suggests that a combination of economic valuation, integrated modelling, stakeholder analysis, and multi-criteria evaluation can provide complementary insights into sustainable and welfare-optimising wetland management and policy. Subsequently, each of the various www.elsevier.com/locate/ecolecon * Corresponding author. Tel.: +46-8-6739540; fax: +46-8-152464. E-mail address: tore@beijer.kva.se (T. Söderqvist). 0921-8009/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S 0921 -8009 (00 )00164 -6 R.K. Turner et al. / Ecological Economics 35 (2000) 7–23 8 components of such integrated wetland research is reviewed and related to wetland management policy. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "0b32bf3a89cf144a8b440156b2b95621", "text": "Today’s Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. 
On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor’s and actuator’s time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account of potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks with high detection rate and low false positive rate as compared to existing methods.", "title": "" }, { "docid": "b9bb07dd039c0542a7309f2291732f82", "text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. 
CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional", "title": "" }, { "docid": "e5f4b8d4e02f68c90fe4b18dfed2719e", "text": "The evolution of modern electronic devices is outpacing the scalability and effectiveness of the tools used to analyze digital evidence recovered from them. Indeed, current digital forensic techniques and tools are unable to handle large datasets in an efficient manner. As a result, the time and effort required to conduct digital forensic investigations are increasing. This paper describes a promising digital forensic visualization framework that displays digital evidence in a simple and intuitive manner, enhancing decision making and facilitating the explanation of phenomena in evidentiary data.", "title": "" }, { "docid": "d41ac7c4301e5efa591f1949327acb38", "text": "During even the most quiescent behavioral periods, the cortex and thalamus express rich spontaneous activity in the form of slow (<1 Hz), synchronous network state transitions. Throughout this so-called slow oscillation, cortical and thalamic neurons fluctuate between periods of intense synaptic activity (Up states) and almost complete silence (Down states). The two decades since the original characterization of the slow oscillation in the cortex and thalamus have seen considerable advances in deciphering the cellular and network mechanisms associated with this pervasive phenomenon. There are, nevertheless, many questions regarding the slow oscillation that await more thorough illumination, particularly the mechanisms by which Up states initiate and terminate, the functional role of the rhythmic activity cycles in unconscious or minimally conscious states, and the precise relation between Up states and the activated states associated with waking behavior. Given the substantial advances in multineuronal recording and imaging methods in both in vivo and in vitro preparations, the time is ripe to take stock of our current understanding of the slow oscillation and pave the way for future investigations of its mechanisms and functions. My aim in this Review is to provide a comprehensive account of the mechanisms and functions of the slow oscillation, and to suggest avenues for further exploration.", "title": "" }, { "docid": "6c4b59e0e8cc42faea528dc1fe7a09ed", "text": "Grounded Theory is a powerful research method for collecting and analysing research data. It was ‘discovered’ by Glaser & Strauss (1967) in the 1960s but is still not widely used or understood by researchers in some industries or PhD students in some science disciplines. This paper demonstrates the steps in the method and describes the difficulties encountered in applying Grounded Theory (GT). A fundamental part of the analysis method in GT is the derivation of codes, concepts and categories. Codes and coding are explained and illustrated in Section 3. Merging the codes to discover emerging concepts is a central part of the GT method and is shown in Section 4. Glaser and Strauss’s constant comparison step is applied and illustrated so that the emerging categories can be seen coming from the concepts and leading to the emergent theory grounded in the data in Section 5. However, the initial applications of the GT method did have difficulties. 
Problems encountered when using the method are described to inform the reader of the realities of the approach. The data used in the illustrative analysis comes from recent IS/IT Case Study research into configuration management (CM) and the use of commercially available computer products (COTS). Why and how the GT approach was appropriate is explained in Section 6. However, the focus is on reporting GT as a research method rather than the results of the Case Study.", "title": "" }, { "docid": "31cf550d44266e967716560faeb30f2b", "text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.", "title": "" }, { "docid": "f32ed82c3ab67c711f50394eea2b9106", "text": "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say”) in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.", "title": "" }, { "docid": "5fc3cbcca7aba6f48da7df299de4abe2", "text": "1. We studied the responses of 103 neurons in visual area V4 of anesthetized macaque monkeys to two novel classes of visual stimuli, polar and hyperbolic sinusoidal gratings. We suspected on both theoretical and experimental grounds that these stimuli would be useful for characterizing cells involved in intermediate stages of form analysis. Responses were compared with those obtained with conventional Cartesian sinusoidal gratings. Five independent, quantitative analyses of neural responses were carried out on the entire population of cells. 2. For each cell, responses to the most effective Cartesian, polar, and hyperbolic grating were compared directly. 
In 18 of 103 cells, the peak response evoked by one stimulus class was significantly different from the peak response evoked by the remaining two classes. Of the remaining 85 cells, 74 had response peaks for the three stimulus classes that were all within a factor of 2 of one another. 3. An information-theoretic analysis of the trial-by-trial responses to each stimulus showed that all but two cells transmitted significant information about the stimulus set as a whole. Comparison of the information transmitted about each stimulus class showed that 23 of 103 cells transmitted a significantly different amount of information about one class than about the remaining two classes. Of the remaining 80 cells, 55 had information transmission rates for the three stimulus classes that were all within a factor of 2 of one another. 4. To identify cells that had orderly tuning profiles in the various stimulus spaces, responses to each stimulus class were fit with a simple Gaussian model. Tuning curves were successfully fit to the data from at least one stimulus class in 98 of 103 cells, and such fits were obtained for at least two classes in 87 cells. Individual neurons showed a wide range of tuning profiles, with response peaks scattered throughout the various stimulus spaces; there were no major differences in the distributions of the widths or positions of tuning curves obtained for the different stimulus classes. 5. Neurons were classified according to their response profiles across the stimulus set with two objective methods, hierarchical cluster analysis and multidimensional scaling. These two analyses produced qualitatively similar results. The most distinct group of cells was highly selective for hyperbolic gratings. The majority of cells fell into one of two groups that were selective for polar gratings: one selective for radial gratings and one selective for concentric or spiral gratings. There was no group whose primary selectivity was for Cartesian gratings. 6. To determine whether cells belonging to identified classes were anatomically clustered, we compared the distribution of classified cells across electrode penetrations with the distribution that would be expected if the cells were distributed randomly. Cells with similar response profiles were often anatomically clustered. 7. A position test was used to determine whether response profiles were sensitive to precise stimulus placement. A subset of Cartesian and non-Cartesian gratings was presented at several positions in and near the receptive field. The test was run on 13 cells from the present study and 28 cells from an earlier study. All cells showed a significant degree of invariance in their selectivity across changes in stimulus position of up to 0.5 classical receptive field diameters. 8. A length and width test was used to determine whether cells preferring non-Cartesian gratings were selective for Cartesian grating length or width. Responses to Cartesian gratings shorter or narrower than the classical receptive field were compared with those obtained with full-field Cartesian and non-Cartesian gratings in 29 cells. Of the four cells that had shown significant preferences for non-Cartesian gratings in the main test, none showed tuning for Cartesian grating length or width that would account for their non-Cartesian res", "title": "" }, { "docid": "68a5192778ae203ea1e31ba4e29b4330", "text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. 
However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.", "title": "" }, { "docid": "e1dd2a719d3389a11323c5245cd2b938", "text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.", "title": "" }, { "docid": "7b5f5da25db515f5dcc48b2722cf00b4", "text": "The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsense replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate more high-quality responses and achieve higher overall performance than the state-of-the-art.", "title": "" } ]
scidocsrr
f490ccf2586f3c7e56ffe965453675c3
Eclectic domain mixing for effective adaptation in action spaces
[ { "docid": "662c29e37706092cfa604bf57da11e26", "text": "Article history: Available online 8 January 2014", "title": "" }, { "docid": "adb64a513ab5ddd1455d93fc4b9337e6", "text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "title": "" } ]
[ { "docid": "619616b551adddc2819d40d63ce4e67d", "text": "Codependency has been defined as an extreme focus on relationships, caused by a stressful family background (J. L. Fischer, L. Spann, & D. W. Crawford, 1991). In this study the authors assessed the relationship of the Spann-Fischer Codependency Scale (J. L. Fischer et al., 1991) and the Potter-Efron Codependency Assessment (L. A. Potter-Efron & P. S. Potter-Efron, 1989) with self-reported chronic family stress and family background. Students (N = 257) completed 2 existing self-report codependency measures and provided family background information. Results indicated that women had higher codependency scores than men on the Spann-Fischer scale. Students with a history of chronic family stress (with an alcoholic, mentally ill, or physically ill parent) had significantly higher codependency scores on both scales. The findings suggest that other types of family stressors, not solely alcoholism, may be predictors of codependency.", "title": "" }, { "docid": "53e6fe645eb83bcc0f86638ee7ce5578", "text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.", "title": "" }, { "docid": "cadc31481c83e7fc413bdfb5d7bfd925", "text": "A hierarchical model of approach and avoidance achievement motivation was proposed and tested in a college classroom. Mastery, performance-approach, and performance-avoidance goals were assessed and their antecedents and consequences examined. Results indicated that mastery goals were grounded in achievement motivation and high competence expectancies; performance-avoidance goals, in fear of failure and low competence expectancies; and performance-approach goals, in achievement motivation, fear of failure, and high competence expectancies. Mastery goals facilitated intrinsic motivation, performance-approach goals enhanced graded performance, and performance-avoidance goals proved inimical to both intrinsic motivation and graded performance. The proposed model represents an integration of classic and contemporary approaches to the study of achievement motivation.", "title": "" }, { "docid": "f047fa049fad96aa43211bef45c375d7", "text": "Graph processing is increasingly used in knowledge economies and in science, in advanced marketing, social networking, bioinformatics, etc. A number of graph-processing systems, including the GPU-enabled Medusa and Totem, have been developed recently. Understanding their performance is key to system selection, tuning, and improvement. Previous performance evaluation studies have been conducted for CPU-based graph-processing systems, such as Graph and GraphX. Unlike them, the performance of GPU-enabled systems is still not thoroughly evaluated and compared. 
To address this gap, we propose an empirical method for evaluating GPU-enabled graph-processing systems, which includes new performance metrics and a selection of new datasets and algorithms. By selecting 9 diverse graphs and 3 typical graph-processing algorithms, we conduct a comparative performance study of 3 GPU-enabled systems, Medusa, Totem, and MapGraph. We present the first comprehensive evaluation of GPU-enabled systems with results giving insight into raw processing power, performance breakdown into core components, scalability, and the impact on performance of system-specific optimization techniques and of the GPU generation. We present and discuss many findings that would benefit users and developers interested in GPU acceleration for graph processing.", "title": "" }, { "docid": "bd89993bebdbf80b516626881d459333", "text": "Creating a mobile application often requires the developers to create one for Android and one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is gruesome since it is time consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem but these hybrids have not been able to provide a native feeling of the resulting applications. This thesis has evaluated the newly released framework React Native that can create both iOS and Android applications by compiling the code written in React. The resulting applications can share code and consist of the UI components which are unique for each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native as some users could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the Android application but the differences were not pronounced. The overall experience is that React Native is a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create an application in a very short time and have it compiled to both Android and iOS. First of all I would like to express my deepest gratitude to Valtech who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants Alexander Lindholm and Tomas Tunström who made it possible for me to bounce off ideas and in the end having a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet who have supported each other and me during this period of time and made it fun even when it was at its most tiresome. Furthermore I would like to thank my examiner Erik Berglund at Linköpings university who has guided me these last months and provided me with insightful comments regarding the paper. 
Ultimately I would like to thank my family who have always been there to support me and especially my little brother who is my main motivation in life.", "title": "" }, { "docid": "3ba87a9a84f317ef3fd97c79f86340c1", "text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.", "title": "" }, { "docid": "124f40ccd178e6284cc66b88da98709d", "text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.", "title": "" }, { "docid": "f177b129e4a02fe42084563a469dc47d", "text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. 
First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.", "title": "" }, { "docid": "016eca10ff7616958ab8f55af71cf5d7", "text": "This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.", "title": "" }, { "docid": "b27e10bd1491cf59daff0b8cd38e60e5", "text": "........................................................................................................................................................ i", "title": "" }, { "docid": "62a1749f03a7f95b25983545b80b6cf7", "text": "To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. 
We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.", "title": "" }, { "docid": "1bf01c4ffe40365f093ef89af4c3610d", "text": "User behaviour analysis based on traffic log in wireless networks can be beneficial to many fields in real life: not only for commercial purposes, but also for improving network service quality and social management. We cluster users into groups marked by the most frequently visited websites to find their preferences. In this paper, we propose a user behaviour model based on Topic Model from document classification problems. We use the logarithmic TF-IDF (term frequency - inverse document frequency) weighting to form a high-dimensional sparse feature matrix. Then we apply LSA (Latent semantic analysis) to deduce the latent topic distribution and generate a low-dimensional dense feature matrix. K-means++, which is a classic clustering algorithm, is then applied to the dense feature matrix and several interpretable user clusters are found. Moreover, by combining the clustering results with additional demographical information, including age, gender, and financial information, we are able to uncover more realistic implications from the clustering results.", "title": "" }, { "docid": "f8d10e75cef35a7fbf5477d4b0cd1288", "text": "We present the development of an ultra-wideband (UWB) radar system and its signal processing algorithms for detecting human breathing and heartbeat in the paper. The UWB radar system consists of two (Tx and Rx) antennas and one compact CMOS UWB transceiver. Several signal processing techniques are developed for the application. The system has been tested by real measurements.", "title": "" }, { "docid": "f61f67772aa4a54b8c20b76d15d1007a", "text": "The Internet is a great discovery for ordinary citizens' correspondence. People with criminal intent have found a method for stealing individuals' personal data without ever meeting them and with minimal danger of being caught. It is called phishing. Phishing represents a huge threat to the web-based business industry. Not only does it shake customers' confidence in online business, it also causes online service providers colossal financial loss. Consequently, it is essential to understand phishing. This paper raises awareness about phishing attacks and anti-phishing tools.", "title": "" }, { "docid": "61406f27199acc5f034c2721d66cda89", "text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.", "title": "" }, { "docid": "8620c228a0a686788b53d9c766b5b6bf", "text": "Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI Level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. 
Systematic institutionalized Scrum across all projects and used data driven tools like story process efficiency to surface Product Backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level. Our experiences shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota level performance – 4 times the productivity and 12 times the quality of waterfall teams.", "title": "" }, { "docid": "64e5cad1b64f1412b406adddc98cd421", "text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.", "title": "" }, { "docid": "a53904f277c06e32bd6ad148399443c6", "text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.", "title": "" }, { "docid": "2fa7d2f8a423c5d3ce53db0c964dcc76", "text": "In recent years, archaeal diversity surveys have received increasing attention. Brazil is a country known for its natural diversity and variety of biomes, which makes it an interesting sampling site for such studies. However, archaeal communities in natural and impacted Brazilian environments have only recently been investigated. In this review, based on a search on the PubMed database on the last week of April 2016, we present and discuss the results obtained in the 51 studies retrieved, focusing on archaeal communities in water, sediments, and soils of different Brazilian environments. We concluded that, in spite of its vast territory and biomes, the number of publications focusing on archaeal detection and/or characterization in Brazil is still incipient, indicating that these environments still represent a great potential to be explored.", "title": "" } ]
scidocsrr
2577cdc082a2d03bd66bf2e56128a68b
Making Learning and Web 2.0 Technologies Work for Higher Learning Institutions in Africa
[ { "docid": "b9e7fedbc42f815b35351ec9a0c31b33", "text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom. On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computer-mediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ", "title": "" } ]
[ { "docid": "90d33a2476534e542e2722d7dfa26c91", "text": "Despite some notable and rare exceptions and after many years of relatively neglect (particularly in the ‘upper echelons’ of IS research), there appears to be some renewed interest in Information Systems Ethics (ISE). This paper reflects on the development of ISE by assessing the use and development of ethical theory in contemporary IS research with a specific focus on the ‘leading’ IS journals (according to the Association of Information Systems). The focus of this research is to evaluate if previous calls for more theoretically informed work are permeating the ‘upper echelons’ of IS research and if so, how (Walsham 1996; Smith and Hasnas 1999; Bell and Adam 2004). For the purposes of scope, this paper follows on from those previous studies and presents a detailed review of the leading IS publications between 2005to2007 inclusive. After several processes, a total of 32 papers are evaluated. This review highlights that whilst ethical topics are becoming increasingly popular in such influential media, most of the research continues to neglect considerations of ethical theory with preferences for a range of alternative approaches. Finally, this research focuses on some of the papers produced and considers how the use of ethical theory could contribute.", "title": "" }, { "docid": "ed176e79496053f1c4fdee430d1aa7fc", "text": "Event recognition systems rely on knowledge bases of event definitions to infer occurrences of events in time. Using a logical framework for representing and reasoning about events offers direct connections to machine learning, via Inductive Logic Programming (ILP), thus allowing to avoid the tedious and error-prone task of manual knowledge construction. However, learning temporal logical formalisms, which are typically utilized by logic-based event recognition systems is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data is usually massive and collected at different times and under various circumstances. Ideally, systems that learn from temporal data should be able to operate in an incremental mode, that is, revise prior constructed knowledge in the face of new evidence. In this work we present an incremental method for learning and revising event-based knowledge, in the form of Event Calculus programs. The proposed algorithm relies on abductive–inductive learning and comprises a scalable clause refinement methodology, based on a compressive summarization of clause coverage in a stream of examples. We present an empirical evaluation of our approach on real and synthetic data from activity recognition and city transport applications.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. 
Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "90563706ada80e880b7fcf25489f9b27", "text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.", "title": "" }, { "docid": "1bc33dcf86871e70bd3b7856fd3c3857", "text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.", "title": "" }, { "docid": "0c88535a3696fe9e2c82f8488b577284", "text": "Touch gestures can be a very important aspect when developing mobile applications with enhanced reality. The main purpose of this research was to determine which touch gestures were most frequently used by engineering students when using a simulation of a projectile motion in a mobile AR applica‐ tion. A randomized experimental design was given to students, and the results showed the most commonly used gestures to visualize are: zoom in “pinch open”, zoom out “pinch closed”, move “drag” and spin “rotate”.", "title": "" }, { "docid": "04e9383039f64bf5ef90e59ba451e45f", "text": "The current generation of manufacturing systems relies on monolithic control software which provides real-time guarantees but is hard to adapt and reuse. These qualities are becoming increasingly important for meeting the demands of a global economy. Ongoing research and industrial efforts therefore focus on service-oriented architectures (SOA) to increase the control software’s flexibility while reducing development time, effort and cost. With such encapsulated functionality, system behavior can be expressed in terms of operations on data and the flow of data between operators. In this thesis we consider industrial real-time systems from the perspective of distributed data processing systems. Data processing systems often must be highly flexible, which can be achieved by a declarative specification of system behavior. In such systems, a user expresses the properties of an acceptable solution while the system determines a suitable execution plan that meets these requirements. 
Applied to the real-time control domain, this means that the user defines an abstract workflow model with global timing constraints from which the system derives an execution plan that takes the underlying system environment into account. The generation of a suitable execution plan often is NP-hard and many data processing systems rely on heuristic solutions to quickly generate high quality plans. We utilize heuristics for finding real-time execution plans. Our evaluation shows that heuristics were successful in finding a feasible execution plan in 99% of the examined test cases. Lastly, data processing systems are engineered for an efficient exchange of data and therefore are usually built around a direct data flow between the operators without a mediating entity in between. Applied to SOA-based automation, the same principle is realized through service choreographies with direct communication between the individual services instead of employing a service orchestrator which manages the invocation of all services participating in a workflow. These three principles outline the main contributions of this thesis: A flexible reconfiguration of SOA-based manufacturing systems with verifiable real-time guarantees, fast heuristics based planning, and a peer-to-peer execution model for SOAs with clear semantics. We demonstrate these principles within a demonstrator that is close to a real-world industrial system.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "ed282d88b5f329490f390372c502f238", "text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. 
We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.", "title": "" }, { "docid": "e87617852de3ce25e1955caf1f4c7a21", "text": "Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Image Edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image. Since edge detection is in the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper the comparative analysis of various Image Edge Detection techniques is presented. The software is developed using MATLAB 7.0. It has been shown that the Canny edge detection algorithm performs better than all these operators under almost all scenarios. Evaluation of the images showed that under noisy conditions Canny, LoG (Laplacian of Gaussian), Robert, Prewitt, Sobel exhibit better performance, respectively. It has been observed that Canny's edge detection algorithm is computationally more expensive compared to LoG (Laplacian of Gaussian), Sobel, Prewitt and Robert's operator.
", "title": "" }, { "docid": "b2e493de6e09766c4ddbac7de071e547", "text": "In this paper we describe and evaluate some recently innovated coupling metrics for object oriented (OO) design. The Coupling Between Objects (CBO) metric of Chidamber and Kemerer (C&K) is evaluated empirically using five OO systems and compared with an alternative OO design metric called NAS, which measures the Number of Associations between a class and its peers. The NAS metric is directly collectible from design documents such as the Object Model of OMT. Results from all systems studied indicate a strong relationship between CBO and NAS, suggesting that they are not orthogonal. We hypothesised that coupling would be related to understandability, the number of errors and error density. No relationships were found for any of the systems between class understandability and coupling. However, we did find partial support for our hypothesis linking increased coupling to increased error density. The work described in this paper is part of the Metrics for OO Programming Systems (MOOPS) project, whose aims are to evaluate existing OO metrics and to innovate and evaluate new OO analysis and design metrics aimed specifically at the early stages of development", "title": "" }, { "docid": "49f21df66ac901e5f37cff022353ed20", "text": "This paper presents the implementation of the interval type-2 to control the process of production of High-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simple way. The proposal evaluates fuzzy techniques to ensure the accuracy of the model; the most important advantage is that the systems do not need pretreatment of the historical data, it is used as it is. The system is a multiple input single output (MISO) and the main goal of this paper is the proposal of a system that optimizes the resources: computational, time, among others.", "title": "" }, { "docid": "c070020d88fb77f768efa5f5ac2eb343", "text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.", "title": "" }, { "docid": "77796f30d8d1604c459fb3f3fe841515", "text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. 
The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "885a51f55d5dfaad7a0ee0c56a64ada3", "text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. 
This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "15886d83be78940609c697b30eb73b13", "text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.", "title": "" }, { "docid": "9b7ff8a7dec29de5334f3de8d1a70cc3", "text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.", "title": "" }, { "docid": "1d29f224933954823228c25e5e99980e", "text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine the unethical computer use behavior. A factor analysis of the related items revealed that the factors were can be divided under five headings; intellectual property, social impact, safety and quality, net integrity and information integrity. 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
1ccc0bff27f008ea979adef174ec6e93
Authenticated Key Exchange over Bitcoin
[ { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" } ]
[ { "docid": "21031b55206dd330852b8d11e8e6a84a", "text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.", "title": "" }, { "docid": "d8e7c9b871f542cd40835b131eedb60a", "text": "Attribute-based encryption (ABE) systems allow encrypting to uncertain receivers by means of an access policy specifying the attributes that the intended receivers should possess. ABE promises to deliver fine-grained access control of encrypted data. However, when data are encrypted using an ABE scheme, key management is difficult if there is a large number of users from various backgrounds. In this paper, we elaborate ABE and propose a new versatile cryptosystem referred to as ciphertext-policy hierarchical ABE (CPHABE). In a CP-HABE scheme, the attributes are organized in a matrix and the users having higher-level attributes can delegate their access rights to the users at a lower level. These features enable a CP-HABE system to host a large number of users from different organizations by delegating keys, e.g., enabling efficient data sharing among hierarchically organized large groups. We construct a CP-HABE scheme with short ciphertexts. The scheme is proven secure in the standard model under non-interactive assumptions.", "title": "" }, { "docid": "d8190669434b167500312091d1a4bf30", "text": "Path analysis was used to test the predictive and mediational role of self-efficacy beliefs in mathematical problem solving. Results revealed that math self-efficacy was more predictive of problem solving than was math self-concept, perceived usefulness of mathematics, prior experience with mathematics, or gender (N = 350). Self-efficacy also mediated the effect of gender and prior experience on self-concept, perceived usefulness, and problem solving. Gender and prior experience influenced self-concept, perceived usefulness, and problem solving largely through the mediational role of self-efficacy. Men had higher performance, self-efficacy, and self-concept and lower anxiety, but these differences were due largely to the influence of self-efficacy, for gender had a direct effect only on self-efficacy and a prior experience variable. Results support the hypothesized role of self-efficacy in A. 
Bandura's (1986) social cognitive theory.", "title": "" }, { "docid": "05bc787d000ecf26c8185b084f8d2498", "text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling", "title": "" }, { "docid": "fa0883f4adf79c65a6c13c992ae08b3f", "text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.", "title": "" }, { "docid": "4f6c7e299b8c7e34778d5c7c10e5a034", "text": "This study presents an online multiparameter estimation scheme for interior permanent magnet motor drives that exploits the switching ripple of finite control set (FCS) model predictive control (MPC). The combinations consist of two, three, and four parameters are analysed for observability at different operating states. Most of the combinations are rank deficient without persistent excitation (PE) of the system, e.g. by signal injection. This study shows that high frequency current ripples by MPC with FCS are sufficient to create PE in the system. This study also analyses parameter coupling in estimation that results in wrong convergence and propose a decoupling technique. The observability conditions for all the combinations are experimentally validated. 
Finally, a full parameter estimation along with the decoupling technique is tested at different operating conditions.", "title": "" }, { "docid": "5ba721a06c17731458ef1ecb6584b311", "text": "BACKGROUND\nPrimary and tension-free closure of a flap is often required after particular surgical procedures (e.g., guided bone regeneration). Other times, flap advancement may be desired for situations such as root coverage.\n\n\nMETHODS\nThe literature was searched for articles that addressed techniques, limitations, and complications associated with flap advancement. These articles were used as background information. In addition, reference information regarding anatomy was cited as necessary to help describe surgical procedures.\n\n\nRESULTS\nThis article describes techniques to advance mucoperiosteal flaps, which facilitate healing. Methods are presented for a variety of treatment scenarios, ranging from minor to major coronal tissue advancement. Anatomic landmarks are identified that need to be considered during surgery. In addition, management of complications associated with flap advancement is discussed.\n\n\nCONCLUSIONS\nTension-free primary closure is attainable. The technique is dependent on the extent that the flap needs to be advanced.", "title": "" }, { "docid": "bb02c3a2c02cce6325fe542f006dde9c", "text": "In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analagous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.", "title": "" }, { "docid": "9098d40a9e16a1bd1ed0a9edd96f3258", "text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. 
In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.", "title": "" }, { "docid": "50b316a52bdfacd5fe319818d0b22962", "text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.", "title": "" }, { "docid": "ef8292e79b8c9f463281f2a9c5c410ef", "text": "In real-time applications, the computer is often required to service programs in response to external signals, and to guarantee that each such program is completely processed within a specified interval following the occurrence of the initiating signal. Such programs are referred to in this paper as time-critical processes, or TCPs.", "title": "" }, { "docid": "0e9e6c1f21432df9dfac2e7205105d46", "text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.", "title": "" }, { "docid": "e9b8787e5bb1f099e914db890e04dc23", "text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.", "title": "" }, { "docid": "1abcf9480879b3d29072f09d5be8609d", "text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. 
Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.", "title": "" }, { "docid": "1f6e92bc8239e358e8278d13ced4a0a9", "text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.", "title": "" }, { "docid": "1106cd6413b478fd32d250458a2233c5", "text": "Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013 Abstract: This article reviews the common used forecast error measurements. All error measurements have been joined in the seven groups: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and drawbacks are discussed for every accuracy measurements. To reduce the impact of outliers, an Integral Normalized Mean Square Error have been proposed. Due to the fact that each error measure has the disadvantages that can lead to inaccurate evaluation of the forecasting results, it is impossible to choose only one measure, the recommendations for selecting the appropriate error measurements are given.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. 
So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.", "title": "" }, { "docid": "0cd2da131bf78526c890dae72514a8f0", "text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4172a0c101756ea8207b65b0dfbbe8ce", "text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. 
Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselvesand each otherecursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NIL s which terminate lists (but NIL s in theCARof some list count). 
In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may useASETas follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) INTERPRETER FOR EXTENDED LAMBDA CALCULUS 407 Those of you who may complain about the lack of ASETQare invited to write(ASET’ foo bar) instead of(ASET ’foo bar) . EVALUATE This is similar to the LISP functionEVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates<expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCHexpression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT(Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to useCATCH. As another example, we can define a THROWfunction, which may then be used with CATCHmuch as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value ofCREATE!PROCESSis a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. 408 SUSSMAN AND STEELE STOP!PROCESS This also takes a process id, but stops the process. The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. 
Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3", "title": "" } ]
scidocsrr
589a982661939792dcd0fc1ff436e0da
vecteurs sphériques et interprétation géométrique des quaternions unitaires
[ { "docid": "b47904279dee1695d67fafcf65b87895", "text": "Some of the confusions concerning quaternions as they are employed in spacecraft attitude work are discussed. The order of quaternion multiplication is discussed in terms of its historical development and its consequences for the quaternion imaginaries. The di erent formulations for the quaternions are also contrasted. It is shown that the three Hamilton imaginaries cannot be interpreted as the basis of the vector space of physical vectors but only as constant numerical column vectors, the autorepresentation of a physical basis.", "title": "" } ]
[ { "docid": "45079629c4bc09cc8680b3d9ac325112", "text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.", "title": "" }, { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" }, { "docid": "928eb797289d2630ff2e701ced782a14", "text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.", "title": "" }, { "docid": "50ec9d25a24e67481a4afc6a9519b83c", "text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. 
It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.", "title": "" }, { "docid": "e48313fd23a22c96cceb62434b044e43", "text": "It is unclear whether combined leg and arm high-intensity interval training (HIIT) improves fitness and morphological characteristics equal to those of leg-based HIIT programs. The aim of this study was to compare the effects of HIIT using leg-cycling (LC) and arm-cranking (AC) ergometers with an HIIT program using only LC. Effects on aerobic capacity and skeletal muscle were analyzed. Twelve healthy male subjects were assigned into two groups. One performed LC-HIIT (n=7) and the other LC- and AC-HIIT (n=5) twice weekly for 16 weeks. The training programs consisted of eight to 12 sets of >90% VO2 (the oxygen uptake that can be utilized in one minute) peak for 60 seconds with a 60-second active rest period. VO2 peak, watt peak, and heart rate were measured during an LC incremental exercise test. The cross-sectional area (CSA) of trunk and thigh muscles as well as bone-free lean body mass were measured using magnetic resonance imaging and dual-energy X-ray absorptiometry. The watt peak increased from baseline in both the LC (23%±38%; P<0.05) and the LC-AC groups (11%±9%; P<0.05). The CSA of the quadriceps femoris muscles also increased from baseline in both the LC (11%±4%; P<0.05) and the LC-AC groups (5%±5%; P<0.05). In contrast, increases were observed in the CSA of musculus psoas major (9%±11%) and musculus anterolateral abdominal (7%±4%) only in the LC-AC group. These results suggest that a combined LC- and AC-HIIT program improves aerobic capacity and muscle hypertrophy in both leg and trunk muscles.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. 
We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "ac9a8cd0b53ff3f2e9de002fa9a66121", "text": "Life-span developmental psychology involves the study of constancy and change in behavior throughout the life course. One aspect of life-span research has been the advancement of a more general, metatheoretical view on the nature of development. The family of theoretical perspectives associated with this metatheoretical view of life-span developmental psychology includes the recognition of multidirectionality in ontogenetic change, consideration of both age-connected and disconnected developmental factors, a focus on the dynamic and continuous interplay between growth (gain) and decline (loss), emphasis on historical embeddedness and other structural contextual factors, and the study of the range of plasticity in development. Application of the family of perspectives associated with life-span developmental psychology is illustrated for the domain of intellectual development. Two recently emerging perspectives of the family of beliefs are given particular attention. The first proposition is methodological and suggests that plasticity can best be studied with a research strategy called testing-the-limits. The second proposition is theoretical and proffers that any developmental change includes the joint occurrence of gain (growth) and loss (decline) in adaptive capacity. To assess the pattern of positive (gains) and negative (losses) consequences resulting from development, it is necessary to know the criterion demands posed by the individual and the environment during the lifelong process of adaptation.", "title": "" }, { "docid": "64f15815e4c1c94c3dfd448dec865b85", "text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.", "title": "" }, { "docid": "7f51bdc05c4a1bf610f77b629d8602f7", "text": "Special Issue Anthony Vance Brigham Young University anthony@vance.name Bonnie Brinton Anderson Brigham Young University bonnie_anderson@byu.edu C. 
Brock Kirwan Brigham Young University kirwan@byu.edu Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.", "title": "" }, { "docid": "9b013f0574cc8fd4139a94aa5cf84613", "text": "Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for rewarddesign) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.", "title": "" }, { "docid": "729cb5a59c1458ce6c9ef3fa29ca1d98", "text": "The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre and we build upon that work by encompassing Stateflow as well. Stateflow is problematical for synchronous languages because of its unbounded behaviour so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. 
We go further and define a \"safe\" subset of Stateflow which elides features which are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow to Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantical checks we have proposed. Finally, we present a small case-study.", "title": "" }, { "docid": "a3772746888956cf78e56084f74df0bf", "text": "Emerging interest of trading companies and hedge funds in mining social web has created new avenues for intelligent systems that make use of public opinion in driving investment decisions. It is well accepted that at high frequency trading, investors are tracking memes rising up in microblogging forums to count for the public behavior as an important feature while making short term investment decisions. We investigate the complex relationship between tweet board literature (like bullishness, volume, agreement etc) with the financial market instruments (like volatility, trading volume and stock prices). We have analyzed Twitter sentiments for more than 4 million tweets between June 2010 and July 2011 for DJIA, NASDAQ-100 and 11 other big cap technological stocks. Our results show high correlation (upto 0.88 for returns) between stock prices and twitter sentiments. Further, using Granger’s Causality Analysis, we have validated that the movement of stock prices and indices are greatly affected in the short term by Twitter discussions. Finally, we have implemented Expert Model Mining System (EMMS) to demonstrate that our forecasted returns give a high value of R-square (0.952) with low Maximum Absolute Percentage Error (MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). We introduce a novel way to make use of market monitoring elements derived from public mood to retain a portfolio within limited risk state (highly improved hedging bets) during typical market conditions.", "title": "" }, { "docid": "139a89ce2fcdfb987aa3476d3618b919", "text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.", "title": "" }, { "docid": "1670dda371458257c8f86390b398b3f8", "text": "Latent topic model such as Latent Dirichlet Allocation (LDA) has been designed for text processing and has also demonstrated success in the task of audio related processing. 
The main idea behind LDA assumes that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to process continuous data, the wordlike unit need be first generated by vector quantization (VQ). This data discretization usually results in information loss. To overcome this shortage, this paper introduces a new topic model named GaussianLDA for audio retrieval. In the proposed model, we consider continuous emission probability, Gaussian instead of multinomial distribution. This new topic model skips the vector quantization and directly models each topic as a Gaussian distribution over audio features. It avoids discretization by this way and integrates the procedure of clustering. The experiments of audio retrieval demonstrate that GaussianLDA achieves better performance than other compared methods. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "39d3f1a5d40325bdc4bca9ee50241c9e", "text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.", "title": "" }, { "docid": "0aabb07ef22ef59d6573172743c6378b", "text": "Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between individual source’s posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.", "title": "" }, { "docid": "2b32087daf5c104e60f91ebf19cd744d", "text": "A large amount of food photos are taken in restaurants for diverse reasons. 
This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "ef2738cfced7ef069b13e5b5dca1558b", "text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS", "title": "" } ]
scidocsrr
443fef97c28d08cad56443529380e197
Gust loading factor — past, present and future
[ { "docid": "c49ae120bca82ef0d9e94115ad7107f2", "text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: Tracy.L.Kijewski.1@nd.edu 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556", "title": "" }, { "docid": "a4cc4d7bf07ee576a9b5a5fdddc02024", "text": "Most international codes and standards provide guidelines and procedures for assessing the along-wind effec structures. Despite their common use of the ‘‘gust loading factor’’ ~GLF! approach, sizeable scatter exists among the wind eff predicted by the various codes and standards under similar flow conditions. This paper presents a comprehensive assessment o of this scatter through a comparison of the along-wind loads and their effects on tall buildings recommended by major internation and standards. ASCE 7-98 ~United States !, AS1170.2-89~Australia!, NBC-1995~Canada!, RLB-AIJ-1993 ~Japan!, and Eurocode-1993 ~Europe! are examined in this study. 
The comparisons consider the definition of wind characteristics, mean wind loads, GLF, equivalent static wind loads, and attendant wind load effects. It is noted that the scatter in the predicted wind loads and their effects arises from the variations in the definition of wind field characteristics in the respective codes and standards. A detailed example is presented to illustrate the overall comparison and to highlight the main findings of this paper. DOI: 10.1061/(ASCE)0733-9445(2002)128:6(788). CE Database keywords: Buildings, highrise; Building codes; Wind loads; Dynamics; Wind velocity.", "title": "" }, { "docid": "90b248a3b141fc55eb2e55d274794953", "text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.", "title": "" }, { "docid": "71c34b48cd22a0a8bc9b507e05919301", "text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/~nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/(ASCE)0733-9445(2003)129:3(394). CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence.
Introduction Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs (e.g., Kareem 1985). Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. Further, higher-order relationships may exist that are beyond the scope of the current discussion (Gurley et al. 2001). Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance (HFBB) and aeroelastic model tests are presently used as routine tools in commercial design practice.
However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada (NBCC) (NRCC 1996), while the second is an aerodynamic-load-based procedure such as those in Australian Standard (AS 1989) and the Architectural Institute of Japan (AIJ) Recommendations (AIJ 1996). The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature (e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992), which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/~nathaz. Through the use of this interactive portal, users can select the geometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings.
Wind-Induced Response Analysis Procedure Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads (e.g., Tschanz and Davenport 1983; Zhou et al. 2002). This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape (Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002). However, instead of utilizing conventional generalized wind loads, a base-bending-moment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. (2002), the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format (Zhou et al. 1999; Zhou and Kareem 2001). The procedure can be conveniently adapted to the acrosswind and torsional response (Boggs and Peterka 1989; Kareem and Zhou 2003).
It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number (Simiu and Scanlan 1996; Kijewski et al. 2001). In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate (Kareem 1982). Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:", "title": "" } ]
[ { "docid": "45a098c09a3803271f218fafd4d951cd", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" }, { "docid": "1145d2375414afbdd5f1e6e703638028", "text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).", "title": "" }, { "docid": "469c17aa0db2c70394f081a9a7c09be5", "text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.", "title": "" }, { "docid": "26f76aa41a64622ee8f0eaaed2aac529", "text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. 
The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.", "title": "" }, { "docid": "d0f4021050e620770f5546171cbfccdc", "text": "This paper presents a compact 10-bit digital-to-analog converter (DAC) for LCD source drivers. The cyclic DAC architecture is used to reduce the area of LCD column drivers when compared to the use of conventional resistor-string DACs. The current interpolation technique is proposed to perform gamma correction after D/A conversion. The gamma correction circuit is shared by four DAC channels using the interleave technique. A prototype 10-bit DAC with gamma correction function is implemented in 0.35 μm CMOS technology and its average die size per channel is 0.053 mm2, which is smaller than those of the R-DACs with gamma correction function. The settling time of the 10-bit DAC is 1 μs, and the maximum INL and DNL are 2.13 least significant bit (LSB) and 1.30 LSB, respectively.", "title": "" }, { "docid": "1cacfd4da5273166debad8a6c1b72754", "text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.", "title": "" }, { "docid": "7148253937ac85f308762f906727d1b5", "text": "Object detection methods like Single Shot Multibox Detector (SSD) provide highly accurate object detection that run in real-time. However, these approaches require a large number of annotated training images. Evidently, not all of these images are equally useful for training the algorithms. Moreover, obtaining annotations in terms of bounding boxes for each image is costly and tedious. In this paper, we aim to obtain a highly accurate object detector using only a fraction of the training images. We do this by adopting active learning that uses ‘human in the loop’ paradigm to select the set of images that would be useful if annotated. Towards this goal, we make the following contributions: 1. We develop a novel active learning method which poses the layered architecture used in object detection as a ‘query by committee’ paradigm to choose the set of images to be queried. 2. We introduce a framework to use the exploration/exploitation trade-off in our methods. 3. We analyze the results on standard object detection datasets which show that with only a third of the training data, we can obtain more than 95% of the localization accuracy of full supervision. Further our methods outperform classical uncertainty-based active learning algorithms like maximum entropy.", "title": "" }, { "docid": "c1ffc050eaee547bd0eb070559ffc067", "text": "This paper proposes a method for designing a sentence set for utterances taking account of prosody. 
This method is based on a measure of coverage which incorporates two factors: (1) the distribution of voice fundamental frequency and phoneme duration predicted by the prosody generation module of a TTS; (2) perceptual damage to naturalness due to prosody modification. A set of 500 sentences with a predicted coverage of 82.6% was designed by this method, and used to collect a speech corpus. The obtained speech corpus yielded 88% of the predicted coverage. The data size was reduced to 49% in terms of number of sentences (89% in terms of number of phonemes) compared to a general-purpose corpus designed without taking prosody into account.", "title": "" }, { "docid": "3f1b7062e978da9c4f9675b926c502db", "text": "Millimeter-wave reconfigurable antennas are predicted as a future of next generation wireless networks with the availability of wide bandwidth. A coplanar waveguide (CPW) fed T-shaped frequency reconfigurable millimeter-wave antenna for 5G networks is presented. The resonant frequency is varied to obtain the 10dB return loss bandwidth in the frequency range of 23-29GHz by incorporating two variable resistors. The radiation pattern contributes two symmetrical radiation beams at approximately ±30o along the end fire direction. The 3dB beamwidth remains conserved over the entire range of operating bandwidth. The proposed antenna targets the applications of wireless systems operating in narrow passages, corridors, mine tunnels, and person-to-person body centric applications.", "title": "" }, { "docid": "c24427c9c600fa16477f22f64ed27475", "text": "The growing problem of unsolicited bulk e-mail, also known as “spam”, has generated a need for reliable anti-spam e-mail filters. Filters of this type have so far been based mostly on manually constructed keyword patterns. An alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. We test this approach on a large collection of personal e-mail messages, which we make publicly available in “encrypted” form contributing towards standard benchmarks. We introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. Finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns, and which is part of a widely used e-mail reader.", "title": "" }, { "docid": "2472a20493c3319cdc87057cc3d70278", "text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. 
Results demonstrate that the model's accuracy rate is around 99%.", "title": "" }, { "docid": "fb37da1dc9d95501e08d0a29623acdab", "text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.", "title": "" }, { "docid": "72c917a9f42d04cae9e03a31e0728555", "text": "We extend Fano’s inequality, which controls the average probability of events in terms of the average of some f–divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary [0, 1]–valued random variables, possibly in continuously infinite number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs for existing lower bounds, on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning. MSC 2000 subject classifications. Primary-62B10; secondary-62F15, 68T05.", "title": "" }, { "docid": "92e150f30ae9ef371ffdd7160c84719d", "text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.", "title": "" }, { "docid": "ca20d27b1e6bfd1f827f967473d8bbdd", "text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. 
We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.", "title": "" }, { "docid": "8eafcf061e2b9cda4cd02de9bf9a31d1", "text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.", "title": "" }, { "docid": "f1b99496d9cdbeede7402738c50db135", "text": "Recommender systems base their operation on past user ratings over a collection of items, for instance, books, CDs, etc. Collaborative filtering (CF) is a successful recommendation technique that confronts the ‘‘information overload’’ problem. Memory-based algorithms recommend according to the preferences of nearest neighbors, and model-based algorithms recommend by first developing a model of user ratings. In this paper, we bring to surface factors that affect CF process in order to identify existing false beliefs. In terms of accuracy, by being able to view the ‘‘big picture’’, we propose new approaches that substantially improve the performance of CF algorithms. 
For instance, we obtain more than 40% increase in precision in comparison to widely-used CF algorithms. In terms of efficiency, we propose a model-based approach based on latent semantic indexing (LSI), that reduces execution times at least 50% than the classic", "title": "" }, { "docid": "7209d813d1a47ac8d2f8f19f4239b8b4", "text": "We conducted two pilot studies to select the appropriate e-commerce website type and contents for the homepage stimuli. The purpose of Pilot Study 1 was to select a website category with which subjects are not familiar, for which they show neither liking nor disliking, but have some interests in browsing. Unfamiliarity with the website was required because familiarity with a certain category of website may influence perceived complexity of (Radocy and Boyle 1988) and liking for the webpage stimuli (Bornstein 1989; Zajonc 2000). We needed a website for which subjects showed neither liking nor disliking so that the manipulation of webpage stimuli in the experiment could be assumed to be the major influence on their reported emotional responses and approach tendencies. To have some degree of interest in browsing the website is necessary for subjects to engage in experiential web-browsing activities with the webpage stimuli. Based on the results of Pilot Study 1, we selected the gifts website as the context for the experimental stimuli. Then, we conducted Pilot Study 2 to identify appropriate gift items to be included in the webpage stimuli. Thirteen gift items, which were shown to elicit neutral affect in the subjects and to be of some interest to the subjects for browsing or purchase, were selected for the website.", "title": "" }, { "docid": "9e6f69cb83422d756909104f2c1c8887", "text": "We introduce a novel method for approximate alignment of point-based surfaces. Our approach is based on detecting a set of salient feature points using a scale-space representation. For each feature point we compute a signature vector that is approximately invariant under rigid transformations. We use the extracted signed feature set in order to obtain approximate alignment of two surfaces. We apply our method for the automatic alignment of multiple scans using both scan-to-scan and scan-to-model matching capabilities.", "title": "" }, { "docid": "cc4458a843a2a6ffa86b4efd1956ffca", "text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8degC depending on stimulation/tissue parameters", "title": "" } ]
scidocsrr
401e8b2df6d66df0938f45ec1b580aba
A clickstream-based collaborative filtering personalization model: towards a better performance
[ { "docid": "0dd78cb46f6d2ddc475fd887a0dc687c", "text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.", "title": "" }, { "docid": "ef2ed85c9a25a549aa7082b18242a120", "text": "Markov models have been extensively used to model Web users' navigation behaviors on Web sites. The link structure of a Web site can be seen as a citation network. By applying bibliographic co-citation and coupling analysis to a Markov model constructed from a Web log file on a Web site, we propose a clustering algorithm called CitationCluster to cluster conceptually related pages. The clustering results are used to construct a conceptual hierarchy of the Web site. Markov model based link prediction is integrated with the hierarchy to assist users' navigation on the Web site.", "title": "" } ]
[ { "docid": "d36a69538293e384d64c905c678f4944", "text": "Many studies have investigated factors that affect susceptibility to false memories. However, few have investigated the role of sleep deprivation in the formation of false memories, despite overwhelming evidence that sleep deprivation impairs cognitive function. We examined the relationship between self-reported sleep duration and false memories and the effect of 24 hr of total sleep deprivation on susceptibility to false memories. We found that under certain conditions, sleep deprivation can increase the risk of developing false memories. Specifically, sleep deprivation increased false memories in a misinformation task when participants were sleep deprived during event encoding, but did not have a significant effect when the deprivation occurred after event encoding. These experiments are the first to investigate the effect of sleep deprivation on susceptibility to false memories, which can have dire consequences.", "title": "" }, { "docid": "86ededf9b452bbc51117f5a117247b51", "text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.", "title": "" }, { "docid": "67e85e8b59ec7dc8b0019afa8270e861", "text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. 
Finally, we discuss how our taxonomy suggests new lines of defenses.", "title": "" }, { "docid": "74dd6f8fbc0469757d00e95b0aeeab65", "text": "To date, no short scale exists with strong psychometric properties that can assess problematic pornography consumption based on an overarching theoretical background. The goal of the present study was to develop a brief scale, the Problematic Pornography Consumption Scale (PPCS), based on Griffiths's (2005) six-component addiction model that can distinguish between nonproblematic and problematic pornography use. The PPCS was developed using an online sample of 772 respondents (390 females, 382 males; Mage = 22.56, SD = 4.98 years). Creation of items was based on previous problematic pornography use instruments and on the definitions of factors in Griffiths's model. A confirmatory factor analysis (CFA) was carried out-because the scale is based on a well-established theoretical model-leading to an 18-item second-order factor structure. The reliability of the PPCS was excellent, and measurement invariance was established. In the current sample, 3.6% of the users belonged to the at-risk group. Based on sensitivity and specificity analyses, we identified an optimal cutoff to distinguish between problematic and nonproblematic pornography users. The PPCS is a multidimensional scale of problematic pornography use with a strong theoretical basis that also has strong psychometric properties in terms of factor structure and reliability.", "title": "" }, { "docid": "fbc47f2d625755bda6d9aa37805b69f1", "text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.", "title": "" }, { "docid": "e38f29a603fb23544ea2fcae04eb1b5d", "text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. 
Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.", "title": "" }, { "docid": "ad6d21a36cc5500e4d8449525eae25ca", "text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.", "title": "" }, { "docid": "793a1a5ff7b7d2c7fa65ce1eaa65b0c0", "text": "In this paper we describe our implementation of algorithms for face detection and recognition in color images under Matlab. For face detection, we trained a feedforward neural network to perform skin segmentation, followed by the eyes detection, face alignment, lips detection and face delimitation. The eyes were detected by analyzing the chrominance and the angle between neighboring pixels and, then, the results were used to perform face alignment. The lips were detected based on the analysis of the Red color component intensity in the lower face region. Finally, the faces were delimited using the eyes and lips positions. The face recognition involved a classifier that used the standard deviation of the difference between color matrices of the faces to identify the input face. The algorithms were run on Faces 1999 dataset. The proposed method achieved 96.9%, 89% and 94% correct detection rate of face, eyes and lips, respectively. 
The correctness rate of the face recognition algorithm was 70.7%.", "title": "" }, { "docid": "02bae85905793e75950acbe2adcc6a7b", "text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.", "title": "" }, { "docid": "f5a7a4729f9374ee7bee4401475647f9", "text": "In the last decade, deep learning has contributed to advances in a wide range computer vision tasks including texture analysis. This paper explores a new approach for texture segmentation using deep convolutional neural networks, sharing important ideas with classic filter bank based texture segmentation methods. Several methods are developed to train Fully Convolutional Networks to segment textures in various applications. We show in particular that these networks can learn to recognize and segment a type of texture, e.g. wood and grass from texture recognition datasets (no training segmentation). We demonstrate that Fully Convolutional Networks can learn from repetitive patterns to segment a particular texture from a single image or even a part of an image. We take advantage of these findings to develop a method that is evaluated on a series of supervised and unsupervised experiments and improve the state of the art on the Prague texture segmentation datasets.", "title": "" }, { "docid": "c1305b1ccc199126a52c6a2b038e24d1", "text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. 
The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b6b9e1eaf17f6cdbc9c060e467021811", "text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.", "title": "" }, { "docid": "b46885c79ece056211faeaa23cbb5c20", "text": "We have been developing the Network Incident analysis Center for Tactical Emergency Response (nicter), whose objective is to detect and identify propagating malwares. The nicter mainly monitors darknet, a set of unused IP addresses, to observe global trends of network threats, while it captures and analyzes malware executables. By correlating the network threats with analysis results of malware, the nicter identifies the root causes (malwares) of the detected network threats. Through a long-term operation of the nicter for more than five years, we have achieved some key findings that would help us to understand the intentions of attackers and the comprehensive threat landscape of the Internet. With a focus on a well-knwon malware, i. e., W32.Downadup, this paper provides some practical case studies with considerations and consequently we could obtain a threat landscape that more than 60% of attacking hosts observed in our dark-net could be infected by W32.Downadup. As an evaluation, we confirmed that the result of the correlation analysis was correct in a rate of 86.18%.", "title": "" }, { "docid": "88130a65e625f85e527d63a0d2a446d4", "text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. 
Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.", "title": "" }, { "docid": "11a9d7a218d1293878522252e1f62778", "text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, $E$, radiated from the aperture generates two components of electric fields, $E_x$ and $E_y$. After passing through the polarizer, both $E_x$ and $E_y$ fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. The phase difference between $E_x$ and $E_y$ is determined by the incident angle $\\phi$ of the polarization of the electric field to the polarizer as well as the thickness, $h$, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for the reflection coefficient less than or equal −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for the axial ratio ≤ 3 dB. The maximum gain of the antenna reaches to 15 dBic. The proposed methodology of this design can apply to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.", "title": "" }, { "docid": "39b7ab83a6a0d75b1ec28c5ff485b98d", "text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texture-based techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy.
Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.", "title": "" }, { "docid": "0b6ce2e4f3ef7f747f38068adef3da54", "text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.", "title": "" }, { "docid": "7d7d8d521cc098a7672cbe2e387dde58", "text": "AIM\nThe purpose of this review is to represent acids that can be used as surface etchant before adhesive luting of ceramic restorations, placement of orthodontic brackets or repair of chipped porcelain restorations. Chemical reactions, application protocol, and etching effect are presented as well.\n\n\nSTUDY SELECTION\nAvailable scientific articles published in PubMed and Scopus literature databases, scientific reports and manufacturers' instructions and product information from internet websites, written in English, using following search terms: \"acid etching, ceramic surface treatment, hydrofluoric acid, acidulated phosphate fluoride, ammonium hydrogen bifluoride\", have been reviewed.\n\n\nRESULTS\nThere are several acids with fluoride ion in their composition that can be used as ceramic surface etchants. The etching effect depends on the acid type and its concentration, etching time, as well as ceramic type. 
The most effective etching pattern is achieved when using hydrofluoric acid; the numerous micropores and channels of different sizes, honeycomb-like appearance, extruded crystals or scattered irregular ceramic particles, depending on the ceramic type, have been detected on the etched surfaces.\n\n\nCONCLUSION\nAcid etching of the bonding surface of glass-ceramic restorations is considered as the most effective treatment method that provides a reliable bond with composite cement. Selective removal of the glassy matrix of silicate ceramics results in a micromorphological three-dimensional porous surface that allows micromechanical interlocking of the luting composite.", "title": "" }, { "docid": "efe8e9759d3132e2a012098d41a05580", "text": "A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary difference between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We briefly describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.", "title": "" }, { "docid": "1be58e70089b58ca3883425d1a46b031", "text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different properties.", "title": "" } ]
scidocsrr
5a812bb56de310cac6c24d113eaa568c
Secure control against replay attacks
[ { "docid": "42d5712d781140edbc6a35703d786e15", "text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance", "title": "" }, { "docid": "223a7496c24dcf121408ac3bba3ad4e5", "text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.", "title": "" }, { "docid": "f2db57e59a2e7a91a0dff36487be3aa4", "text": "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.", "title": "" } ]
[ { "docid": "ffea50948eab00d47f603d24bcfc1bfd", "text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.", "title": "" }, { "docid": "d038c7b29701654f8ee908aad395fe8c", "text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.", "title": "" }, { "docid": "caab24c7af0c58965833d56382132d66", "text": "Mesh slicing is one of the most common operations in additive manufacturing (AM). However, the computing burden for such an application is usually very heavy, especially when dealing with large models. Nowadays the graphics processing units (GPU) have abundant resources and it is reasonable to utilize the computing power of GPU for mesh slicing. In the paper, we propose a parallel implementation of the slicing algorithm using GPU. We test the GPU-accelerated slicer on several models and obtain a speedup factor of about 30 when dealing with large models, compared with the CPU implementation. Results show the power of GPU on the mesh slicing problem. In the future, we will extend our work and standardize the slicing process.", "title": "" }, { "docid": "e3b1e52066d20e7c92e936cdb72cc32b", "text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.", "title": "" }, { "docid": "94076bd2a4587df2bee9d09e81af2109", "text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. 
As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.", "title": "" }, { "docid": "3e2c79715d8ae80e952d1aabf03db540", "text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].", "title": "" }, { "docid": "b53f2f922661bfb14bf2181236fad566", "text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. 
A naively trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training (often, the number of such labelled target samples are not sufficient to train a robust model using target data alone). Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model (DLID): Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representation of the source and target inputs. 
However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation. Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006), have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pretraining. The key difference is that in DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. 
Let the set of data samples for the source domain $S$ be denoted by $D_S$, and that of the target domain $T$ be denoted by $D_T$. Starting with all the source data samples $D_S$, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from $D_T$, and decrease the proportion of samples drawn from $D_S$. In particular, let $p \\in [1, \\ldots, P]$ be an index over the $P$ datasets we generate. Then we have $D_p = D_S$ for $p = 1$, $D_p = D_T$ for $p = P$. For $p \\in [2, \\ldots, P-1]$, datasets $D_p$ and $D_{p+1}$ are created in a way so that the proportion of samples from $D_T$ in $D_p$ is less than in $D_{p+1}$. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between $S$ and $T$.", "title": "" }, { "docid": "c36986dd83276fe01e73a4125f99f7c0", "text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization, with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). Keywords: Swarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.", "title": "" }, { "docid": "b9dfc489ff1bf6907929a450ea614d0b", "text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT is the greener version right now.", "title": "" }, { "docid": "16816ba52854f6242701b27fdd0263fe", "text": "The economic viability of colonizing Mars is examined. It is shown that, of all bodies in the solar system other than Earth, Mars is unique in that it has the resources required to support a population of sufficient size to create locally a new branch of human civilization. It is also shown that while Mars may lack any cash material directly exportable to Earth, Mars' orbital elements and other physical parameters give a unique positional advantage that will allow it to act as a keystone supporting extractive activities in the asteroid belt and elsewhere in the solar system. 
The potential of relatively near-term types of interplanetary transportation systems is examined, and it is shown that with very modest advances on a historical scale, systems can be put in place that will allow individuals and families to emigrate to Mars at their own discretion. Their motives for doing so will parallel in many ways the historical motives for Europeans and others to come to America, including higher pay rates in a labor-short economy, escape from tradition and oppression, as well as freedom to exercise their drive to create in an untamed and undefined world. Under conditions of such large scale immigration, sale of real-estate will add a significant source of income to the planet’s economy. Potential increases in real-estate values after terraforming will provide a sufficient financial incentive to do so. In analogy to frontier America, social conditions on Mars will make it a pressure cooker for invention. These inventions, licensed on Earth, will raise both Terrestrial and Martian living standards and contribute large amounts of income to support the development of the colony.", "title": "" }, { "docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb", "text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.", "title": "" }, { "docid": "6f3a902ed5871a95f6b5adf197684748", "text": "BACKGROUND\nThe choice of antimicrobials for initial treatment of peritoneal dialysis (PD)-related peritonitis is crucial for a favorable outcome. There is no consensus about the best therapy; few prospective controlled studies have been published, and the only published systematic reviews did not report superiority of any class of antimicrobials. The objective of this review was to analyze the results of PD peritonitis treatment in adult patients by employing a new methodology, the proportional meta-analysis.\n\n\nMETHODS\nA review of the literature was conducted. There was no language restriction. Studies were obtained from MEDLINE, EMBASE, and LILACS. 
The inclusion criteria were: (a) case series and RCTs with the number of reported patients in each study greater than five, (b) use of any antibiotic therapy for initial treatment (e.g., cefazolin plus gentamicin or vancomycin plus gentamicin), for Gram-positive (e.g., vancomycin or a first generation cephalosporin), or for Gram-negative rods (e.g., gentamicin, ceftazidime, and fluoroquinolone), (c) patients with PD-related peritonitis, and (d) studies specifying the rates of resolution. A proportional meta-analysis was performed on outcomes using a random-effects model, and the pooled resolution rates were calculated.\n\n\nRESULTS\nA total of 64 studies (32 for initial treatment and negative culture, 28 reporting treatment for Gram-positive rods and 24 reporting treatment for Gram-negative rods) and 21 RCTs met all inclusion criteria (14 for initial treatment and negative culture, 8 reporting treatment for Gram-positive rods and 8 reporting treatment for Gram-negative rods). The pooled resolution rate of ceftazidime plus glycopeptide as initial treatment (pooled proportion = 86% [95% CI 0.82-0.89]) was significantly higher than first generation cephalosporin plus aminoglycosides (pooled proportion = 66% [95% CI 0.57-0.75]) and significantly higher than glycopeptides plus aminoglycosides (pooled proportion = 75% [95% CI 0.69-0.80]. Other comparisons of regimens used for either initial treatment, treatment for Gram-positive rods or Gram-negative rods did not show statistically significant differences.\n\n\nCONCLUSION\nWe showed that the association of a glycopeptide plus ceftazidime is superior to other regimens for initial treatment of PD peritonitis. This result should be carefully analyzed and does not exclude the necessity of monitoring the local microbiologic profile in each dialysis center to choice the initial therapeutic protocol.", "title": "" }, { "docid": "26a6ba8cba43ddfd3cac0c90750bf4ad", "text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.", "title": "" }, { "docid": "b7f15089db3f5d04c1ce1d5f09b0b1f0", "text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. 
We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.", "title": "" }, { "docid": "0f87fefbe2cfc9893b6fc490dd3d40b7", "text": "With the tremendous amount of textual data available on the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the linguistic components of the summarizer, both analysis and generation.", "title": "" }, { "docid": "031e3f1ae2537b603b4b2119f3dad572", "text": "Efficient storage and querying of large repositories of RDF content is important due to the widespread growth of Semantic Web and Linked Open Data initiatives. Many novel database systems that store RDF in its native form or within traditional relational storage have demonstrated their ability to scale to large volumes of RDF content. However, it is increasingly becoming obvious that the simple dyadic relationship captured through traditional triples alone is not sufficient for modelling multi-entity relationships, provenance of facts, etc. Such richer models are supported in RDF through two techniques - first, called reification which retains the triple nature of RDF and the second, a non-standard extension called N-Quads. In this paper, we explore the challenges of supporting such richer semantic data by extending the state-of-the-art RDF-3X system. We describe our implementation of RQ-RDF-3X, a reification and quad enhanced RDF-3X, which involved a significant re-engineering ranging from the set of indexes and their compression schemes to the query processing pipeline for queries over reified content. Using large RDF repositories such as YAGO2S and DBpedia, and a set of SPARQL queries that utilize reification model, we demonstrate that RQ-RDF-3X is significantly faster than RDF-3X.", "title": "" }, { "docid": "cea9c1bab28363fc6f225b7843b8df99", "text": "
The leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by near-infrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002).", "title": "" }, { "docid": "2805c831c044d0b4da29bf96e9e7d3ad", "text": "Combined chromatin immunoprecipitation and next-generation sequencing (ChIP-seq) has enabled genome-wide epigenetic profiling of numerous cell lines and tissue types. A major limitation of ChIP-seq, however, is the large number of cells required to generate high-quality data sets, precluding the study of rare cell populations. Here, we present an ultra-low-input micrococcal nuclease-based native ChIP (ULI-NChIP) and sequencing method to generate genome-wide histone mark profiles with high resolution from as few as 10(3) cells. We demonstrate that ULI-NChIP-seq generates high-quality maps of covalent histone marks from 10(3) to 10(6) embryonic stem cells. Subsequently, we show that ULI-NChIP-seq H3K27me3 profiles generated from E13.5 primordial germ cells isolated from single male and female embryos show high similarity to recent data sets generated using 50-180 × more material. 
Finally, we identify sexually dimorphic H3K27me3 enrichment at specific genic promoters, thereby illustrating the utility of this method for generating high-quality and -complexity libraries from rare cell populations.", "title": "" }, { "docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a", "text": "Face recognition is a challenging task which involves determining the identity of facial images. With the availability of a massive amount of labeled facial images gathered from the Internet, deep convolutional neural networks (DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrained environments, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compared with the source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase the domain discrepancy between the source training database and the target application database, which makes the learnt model degenerate in the target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune the pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between the source and target face databases and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between the source database and the target database and utilize the massive amount of labeled facial images of the source database to train the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.", "title": "" } ]
scidocsrr
42030177956935b1186f5c1568db71de
A Quick Startup Technique for High-$Q$ Oscillators Using Precisely Timed Energy Injection
[ { "docid": "b20aa52ea2e49624730f6481a99a8af8", "text": "A 51.3-MHz 18-μW 21.8-ppm/°C relaxation oscillator is presented in 90-nm CMOS. The proposed oscillator employs an integrated error feedback and composite resistors to minimize its sensitivity to temperature variations. For a temperature range from −20 °C to 100 °C, the fabricated circuit demonstrates a frequency variation less than ±0.13%, leading to an average frequency drift of 21.8 ppm/°C. As the supply voltage changes from 0.8 to 1.2 V, the frequency variation is ±0.53%. The measured rms jitter and phase noise at 1-MHz offset are 89.27 ps and −83.29 dBc/Hz, respectively.", "title": "" }, { "docid": "a822bb33898b1fa06d299fbed09647d4", "text": "The design of a 1.8 GHz 3-stage current-starved ring oscillator with a process- and temperature-compensated current source is presented. Without post-fabrication calibration or off-chip components, the proposed low variation circuit is able to achieve a 65.1% reduction in the normalized standard deviation of its center frequency at room temperature and 85 ppm/°C temperature stability with no penalty in the oscillation frequency, the phase noise or the start-up time. Analysis on the impact of transistor scaling indicates that the same circuit topology can be applied to improve variability as feature size scales beyond the current deep submicron technology. Measurements taken on 167 test chips from two different lots fabricated in a standard 90 nm CMOS process show a 3x improvement in frequency variation compared to the baseline case of a conventional current-starved ring oscillator. The power and area for the proposed circuitry is 87 μW and 0.013 mm2 compared to 54 μW and 0.01 mm2 in the baseline case.", "title": "" } ]
[ { "docid": "16a7142a595da55de7df5253177cbcb5", "text": "The present study represents the first large-scale, prospective comparison to test whether aging out of foster care contributes to homelessness risk in emerging adulthood. A nationally representative sample of adolescents investigated by the child welfare system in 2008 to 2009 from the second cohort of the National Survey of Child and Adolescent Well-being Study (NSCAW II) reported experiences of housing problems at 18- and 36-month follow-ups. Latent class analyses identified subtypes of housing problems, including literal homelessness, housing instability, and stable housing. Regressions predicted subgroup membership based on aging out experiences, receipt of foster care services, and youth and county characteristics. Youth who reunified after out-of-home placement in adolescence exhibited the lowest probability of literal homelessness, while youth who aged out experienced similar rates of literal homelessness as youth investigated by child welfare but never placed out of home. No differences existed between groups on prevalence of unstable housing. Exposure to independent living services and extended foster care did not relate with homelessness prevention. Findings emphasize the developmental importance of families in promoting housing stability in the transition to adulthood, while questioning child welfare's current focus on preparing foster youth to live.", "title": "" }, { "docid": "5277cdcfb9352fa0e8cf08ff723d34c6", "text": "Extractive style query oriented multi document summarization generates the summary by extracting a proper set of sentences from multiple documents based on the pre-given query. This paper proposes a novel multi document summarization framework via deep learning model. This uniform framework consists of three parts: concepts extraction, summary generation, and reconstruction validation, which work together to achieve the largest coverage of the documents content. A new query oriented extraction technique is proposed to concentrate distributed information to hidden units layer by layer. Then, the whole deep architecture is fine-tuned by minimizing the information loss of reconstruction validation. According to the concentrated information, dynamic programming is used to seek most informative set of sentences as the summary. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed framework and algorithms.", "title": "" }, { "docid": "fdabfd5f242e433bb1497ea913d67cd2", "text": "OBJECTIVES\nTo investigate the ability of cerebrospinal fluid (CSF) and plasma measures to discriminate early-stage Alzheimer disease (AD) (defined by clinical criteria and presence/absence of brain amyloid) from nondemented aging and to assess whether these biomarkers can predict future dementia in cognitively normal individuals.\n\n\nDESIGN\nEvaluation of CSF beta-amyloid(40) (Abeta(40)), Abeta(42), tau, phosphorylated tau(181), and plasma Abeta(40) and Abeta(42) and longitudinal clinical follow-up (from 1 to 8 years).\n\n\nSETTING\nLongitudinal studies of healthy aging and dementia through an AD research center.\n\n\nPARTICIPANTS\nCommunity-dwelling volunteers (n = 139) aged 60 to 91 years and clinically judged as cognitively normal (Clinical Dementia Rating [CDR], 0) or having very mild (CDR, 0.5) or mild (CDR, 1) AD dementia.\n\n\nRESULTS\nIndividuals with very mild or mild AD have reduced mean levels of CSF Abeta(42) and increased levels of CSF tau and phosphorylated tau(181). 
Cerebrospinal fluid Abeta(42) level completely corresponds with the presence or absence of brain amyloid (imaged with Pittsburgh Compound B) in demented and nondemented individuals. The CSF tau/Abeta(42) ratio (adjusted hazard ratio, 5.21; 95% confidence interval, 1.58-17.22) and phosphorylated tau(181)/Abeta(42) ratio (adjusted hazard ratio, 4.39; 95% confidence interval, 1.62-11.86) predict conversion from a CDR of 0 to a CDR greater than 0.\n\n\nCONCLUSIONS\nThe very mildest symptomatic stage of AD exhibits the same CSF biomarker phenotype as more advanced AD. In addition, levels of CSF Abeta(42), when combined with amyloid imaging, augment clinical methods for identifying in individuals with brain amyloid deposits whether dementia is present or not. Importantly, CSF tau/Abeta(42) ratios show strong promise as antecedent (preclinical) biomarkers that predict future dementia in cognitively normal older adults.", "title": "" }, { "docid": "88f19225cf9cd323804e8ee551bf875a", "text": "Traceability—the ability to follow the life of software artifacts—is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.", "title": "" }, { "docid": "970b65468b6afdf572dd8759cea3f742", "text": "We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the \"follow-the-perturbed-leader\" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.", "title": "" }, { "docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24", "text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. 
WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.", "title": "" }, { "docid": "94877adef2f6a0fa0219e0d6494dbbc5", "text": "A miniaturized quadrature hybrid coupler, a rat-race coupler, and a 4 times 4 Butler matrix based on a newly proposed planar artificial transmission line are presented in this paper for application in ultra-high-frequency (UHF) radio-frequency identification (RFID) systems. This planar artificial transmission line is composed of microstrip quasi-lumped elements and their discontinuities and is capable of synthesizing microstrip lines with various characteristic impedances and electrical lengths. At the center frequency of the UHF RFID system, the occupied sizes of the proposed quadrature hybrid and rat-race couplers are merely 27% and 9% of those of the conventional designs. The miniaturized couplers demonstrate well-behaved wideband responses with no spurious harmonics up to two octaves. The measured results reveal excellent agreement with the simulations. Additionally, a 4 times 4 Butler matrix, which may occupy a large amount of circuit area in conventional designs, has been successfully miniaturized with the help of the proposed artificial transmission line. The circuit size of the Butler matrix is merely 21% of that of a conventional design. The experimental results show that the proposed Butler matrix features good phase control, nearly equal power splitting, and compact size and is therefore applicable to the reader modules in various RFID systems.", "title": "" }, { "docid": "31e6da3635ec5f538f15a7b3e2d95e5b", "text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. 
These results are particularly useful since more energy data will become available at the disaggregated level in the future.", "title": "" }, { "docid": "91bdfcad73186a545028d922159f0857", "text": "Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code with dedicated cloud instance has been made publicly available (https://goo.gl/STGMGx).", "title": "" }, { "docid": "368c874a35428310bb0d497045b411f9", "text": "Triboelectric nanogenerator (TENG) technology has emerged as a new mechanical energy harvesting technology with numerous advantages. This paper analyzes its charging behavior together with a load capacitor. Through numerical and analytical modeling, the charging performance of a TENG with a bridge rectifier under periodic external mechanical motion is completely analogous to that of a dc voltage source in series with an internal resistance. An optimum load capacitance that matches the TENGs impedance is observed for the maximum stored energy. This optimum load capacitance is theoretically detected to be linearly proportional to the charging cycle numbers and the inherent TENG capacitance. Experiments were also performed to further validate our theoretical anticipation and show the potential application of this paper in guiding real experimental designs.", "title": "" }, { "docid": "87a256b5e67b97cf4a11b5664a150295", "text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.", "title": "" }, { "docid": "828affae8c3052590591c16f02a55d91", "text": "We present short elementary proofs of the well-known Ruffini-Abel-Galois theorems on unsolvability of algebraic equations in radicals. 
This proof is obtained from existing expositions by stripping away material not required for the proof (but presumably required elsewhere). In particular, we do not use the terms ‘Galois group’ and even ‘group’. However, our presentation is a good way to learn (or recall) the starting idea of Galois theory: to look at how the symmetry of a polynomial is decreased when a radical is extracted. So the note provides a bridge (by showing that there is no gap) between elementary mathematics and Galois theory. The note is accessible to students familiar with polynomials, complex numbers and permutations; so the note might be interesting easy reading for professional mathematicians.", "title": "" }, { "docid": "3a5ef0db1fbbebd7c466a3b657e5e173", "text": "Fully homomorphic encryption is faced with two problems now. One is candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) According to the binary-LWE recently, we choose secret key from binary set and modify the basic encryption scheme proposed in Linder and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for fully homomorphic encryption scheme is an important work. We estimate the concert parameters for our scheme. We compare these parameters between our scheme and Bra12 scheme. Our scheme have public key and private key that smaller by a factor of about logq than in Bra12 scheme. Tensored ciphertext in our scheme is smaller by a factor of about log2q than in Bra12 scheme. Key switching matrix in our scheme is smaller by a factor of about log3q than in Bra12 scheme.", "title": "" }, { "docid": "98bdcca45140bd3ba7b0c19afa06d5a9", "text": "Skeletal muscle atrophy is a debilitating response to starvation and many systemic diseases including diabetes, cancer, and renal failure. We had proposed that a common set of transcriptional adaptations underlie the loss of muscle mass in these different states. To test this hypothesis, we used cDNA microarrays to compare the changes in content of specific mRNAs in muscles atrophying from different causes. We compared muscles from fasted mice, from rats with cancer cachexia, streptozotocin-induced diabetes mellitus, uremia induced by subtotal nephrectomy, and from pair-fed control rats. Although the content of >90% of mRNAs did not change, including those for the myofibrillar apparatus, we found a common set of genes (termed atrogins) that were induced or suppressed in muscles in these four catabolic states. Among the strongly induced genes were many involved in protein degradation, including polyubiquitins, Ub fusion proteins, the Ub ligases atrogin-1/MAFbx and MuRF-1, multiple but not all subunits of the 20S proteasome and its 19S regulator, and cathepsin L. Many genes required for ATP production and late steps in glycolysis were down-regulated, as were many transcripts for extracellular matrix proteins. 
Some genes not previously implicated in muscle atrophy were dramatically up-regulated (lipin, metallothionein, AMP deaminase, RNA helicase-related protein, TG interacting factor) and several growth-related mRNAs were down-regulated (P311, JUN, IGF-1-BP5). Thus, different types of muscle atrophy share a common transcriptional program that is activated in many systemic diseases.", "title": "" }, { "docid": "d565220c9e4b9a4b9f8156434b8b4cd3", "text": "Decision Support Systems (DDS) have developed to exploit Information Technology (IT) to assist decision-makers in a wide variety of fields. The need to use spatial data in many of these diverse fields has led to increasing interest in the development of Spatial Decision Support Systems (SDSS) based around the Geographic Information System (GIS) technology. The paper examines the relationship between SDSS and GIS and suggests that SDSS is poised for further development owing to improvement in technology and the greater availability of spatial data.", "title": "" }, { "docid": "304f4e48ac5d5698f559ae504fc825d9", "text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.", "title": "" }, { "docid": "9e91f7e57e074ec49879598c13035d70", "text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. 
Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.", "title": "" }, { "docid": "90d1d78d3d624d3cb1ecc07e8acaefd4", "text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.", "title": "" }, { "docid": "14bcbfcb6e7165e67247453944f37ac0", "text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.", "title": "" }, { "docid": "ba0d63c3e6b8807e1a13b36bc30d5d72", "text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. 
The source code is provided in the project website.", "title": "" } ]
scidocsrr
7c73ce375af115507d77f51dc58f1905
Classifying Lexical-semantic Relationships by Exploiting Sense / Concept Representations
[ { "docid": "d735cfbf58094aac2fe0a324491fdfe7", "text": "We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset/lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.", "title": "" }, { "docid": "a1a1ba8a6b7515f676ba737434c6d86a", "text": "Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%.", "title": "" } ]
[ { "docid": "ce0649675da17105e3142ad50835fac8", "text": "Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate. Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory. Here, we study a particular class of multiagent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp. By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way. To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection. We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.", "title": "" }, { "docid": "259647f0899bebc4ad67fb30a8c6f69b", "text": "Internet of Things (IoT) communication is vital for the developing of smart communities. The rapid growth of IoT depends on reliable wireless networks. The evolving 5G cellular system addresses this challenge by adopting cloud computing technology in Radio Access Network (RAN); namely Cloud RAN or CRAN. CRAN enables better scalability, flexibility, and performance that allows 5G to provide connectivity for the vast volume of IoT devices envisioned for smart cities. This work investigates the load balance (LB) problem in CRAN, with the goal of reducing latencies experienced by IoT communications. Eight practical LB algorithms are studied and evaluated in CRAN environment, based on real cellular network traffic characteristics provided by Nokia Research. Experiment results on queue-length analysis show that the simple, light-weight queue-based LB is almost as effectively as the much more complex waiting-time-based LB. We believe that this study is significant in enabling 5G networks for providing IoT communication backbone in the emerging smart communities; it also has wide applications in other distributed systems.", "title": "" }, { "docid": "9882c528dce5e9bb426d057ee20a520c", "text": "The use of herbal medicinal products and supplements has increased tremendously over the past three decades with not less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use are either poorly monitored or not even monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative, therefore, that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. 
This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.", "title": "" }, { "docid": "09b86e959a0b3fa28f9d3462828bbc31", "text": "Industry 4.0 has become more popular due to recent developments in cyber-physical systems, big data, cloud computing, and industrial wireless networks. Intelligent manufacturing has produced a revolutionary change, and evolving applications, such as product lifecycle management, are becoming a reality. In this paper, we propose and implement a manufacturing big data solution for active preventive maintenance in manufacturing environments. First, we provide the system architecture that is used for active preventive maintenance. Then, we analyze the method used for collection of manufacturing big data according to the data characteristics. Subsequently, we perform data processing in the cloud, including the cloud layer architecture, the real-time active maintenance mechanism, and the offline prediction and analysis method. Finally, we analyze a prototype platform and implement experiments to compare the traditionally used method with the proposed active preventive maintenance method. The manufacturing big data method used for active preventive maintenance has the potential to accelerate implementation of Industry 4.0.", "title": "" }, { "docid": "49b0842c9b7e6627b12faa1b821d4c19", "text": "Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques – guided backpropagation and occlusion – to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.", "title": "" }, { "docid": "cf5f21e8f0d2ba075f2061c7a69b622a", "text": "This article presents guiding principles for the assessment of competence developed by the members of the American Psychological Association’s Task Force on Assessment of Competence in Professional Psychology. These principles are applicable to the education, training, and credentialing of professional psychologists, and to practicing psychologists across the professional life span. The principles are built upon a review of competency assessment models, including practices in both psychology and other professions. These principles will help to ensure that psychologists reinforce the importance of a culture of competence. The implications of the principles for professional psychology also are highlighted.", "title": "" }, { "docid": "2c834988686bf2d28ba7668ffaf14b0e", "text": "Revealing the latent community structure, which is crucial to understanding the features of networks, is an important problem in network and graph analysis. During the last decade, many approaches have been proposed to solve this challenging problem in diverse ways, i.e. 
different measures or data structures. Unfortunately, experimental reports on existing techniques fell short in validity and integrity since many comparisons were not based on a unified code base or merely discussed in theory. We engage in an in-depth benchmarking study of community detection in social networks. We formulate a generalized community detection procedure and propose a procedure-oriented framework for benchmarking. This framework enables us to evaluate and compare various approaches to community detection systematically and thoroughly under identical experimental conditions. Upon that we can analyze and diagnose the inherent defect of existing approaches deeply, and further make effective improvements correspondingly. We have re-implemented ten state-of-the-art representative algorithms upon this framework and make comprehensive evaluations of multiple aspects, including the efficiency evaluation, performance evaluations, sensitivity evaluations, etc. We discuss their merits and faults in depth, and draw a set of take-away interesting conclusions. In addition, we present how we can make diagnoses for these algorithms resulting in significant improvements.", "title": "" }, { "docid": "27d2326844c4eae0e3bdd9a174a9352e", "text": "A straight-line drawing of a plane graph is called an open rectangle-of-influence drawing if there is no vertex in the proper inside of the axis-parallel rectangle defined by the two ends of every edge. In an inner triangulated plane graph, every inner face is a triangle although the outer face is not always a triangle. In this paper, we first obtain a sufficient condition for an inner triangulated plane graph G to have an open rectangle-of-influence drawing; the condition is expressed in terms of a labeling of angles of a subgraph of G. We then present an O(n/log n)-time algorithm to examine whether G satisfies the condition and, if so, construct an open rectangle-of-influence drawing of G on an (n − 1) × (n − 1) integer grid, where n is the number of vertices in G.", "title": "" }, { "docid": "87123af7c3cb813b652c6f1edc8e8150", "text": "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. 
We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.", "title": "" }, { "docid": "0262eec8de03b028877c7a95e8bd7ea3", "text": "A planar monopole having a small size yet providing two wide bands for covering the eight-band LTE/GSM/UMTS operation in the mobile phone is presented. The small-size yet wideband operation is achieved by exciting the antenna's wide radiating plate using a coupling feed and short-circuiting it to the system ground plane of the mobile phone through a long meandered strip as an inductive shorting strip. The coupling feed leads to a wide operating band to cover the frequency range of 1710-2690 MHz for the GSM1800/1900/UMTS/LTE2300/2500 operation. The inductive shorting strip results in the generation of a wide operating band to cover the frequency range of 698-960 MHz for the LTE700/GSM850/900 operation. The planar monopole can be directly printed on the no-ground portion of the system circuit board of the mobile phone and is promising to be integrated with a practical loudspeaker. The antenna's radiating plate can also be folded into a thin structure (3 mm only) to occupy a small volume of 3 × 6 × 40 mm3 (0.72 cm3) for the eight-band LTE/GSM/UMTS operation; in this case, including the 8-mm feed gap, the antenna shows a low profile of 14 mm to the ground plane of the mobile phone. The proposed antenna, including its planar and folded structures, are suitable for slim mobile phone applications.", "title": "" }, { "docid": "8981e058b13a154e7d85d30de0dfc3f7", "text": "Game engine is the core of game development. Unity3D is a game engine that supports the development on multiple platforms including web, mobiles, etc. The main technology characters of Unity3D are introduced firstly. The component model, event-driven model and class relationships in Unity3D are analyzed. Finally, a generating NPCs algorithm and a shooting algorithm are respectively presented to show common key technologies in Unity3D.", "title": "" }, { "docid": "0851caf6599f97bbeaf68b57e49b4da5", "text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. 
We also present a novel interpretation technique which we use to provide explanations of the model's predictions.", "title": "" }, { "docid": "b15c689ff3dd7b2e7e2149e73b5451ac", "text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.", "title": "" }, { "docid": "7da0a472f0a682618eccbfd4229ca14f", "text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.", "title": "" }, { "docid": "74e40c5cb4e980149906495da850d376", "text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). 
Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.", "title": "" }, { "docid": "6859a7d2838708a2361e2e0b0cf1819c", "text": "In edge computing, content and service providers aim at enhancing user experience by providing services closer to the user. At the same time, infrastructure providers such as access ISPs aim at utilizing their infrastructure by selling edge resources to these content and service providers. In this context, auctions are widely used to set a price that reflects supply and demand in a fair way. In this work, we propose RAERA, the first robust auction scheme for edge resource allocation that is suitable to work with the market uncertainty typical for edge resources---here, customers typically have different valuation distribution for a wide range of heterogeneous resources. Additionally, RAERA encourages truthful bids and allows the infrastructure provider to maximize its break-even profit. Our preliminary evaluations highlight that REARA offers a time dependent fair price. Sellers can achieve higher revenue in the range of 5%-15% irrespective of varying demands and the buyers pay up to 20% lower than their top bid amount.", "title": "" }, { "docid": "702a4a841f24f3b9464989360ac44b41", "text": "Small-cell lung cancer (SCLC) is an aggressive malignancy associated with a poor prognosis. First-line treatment has remained unchanged for decades, and a paucity of effective treatment options exists for recurrent disease. Nonetheless, advances in our understanding of SCLC biology have led to the development of novel experimental therapies. Poly [ADP-ribose] polymerase (PARP) inhibitors have shown promise in preclinical models, and are under clinical investigation in combination with cytotoxic therapies and inhibitors of cell-cycle checkpoints.Preclinical data indicate that targeting of histone-lysine N-methyltransferase EZH2, a regulator of chromatin remodelling implicated in acquired therapeutic resistance, might augment and prolong chemotherapy responses. High expression of the inhibitory Notch ligand Delta-like protein 3 (DLL3) in most SCLCs has been linked to expression of Achaete-scute homologue 1 (ASCL1; also known as ASH-1), a key transcription factor driving SCLC oncogenesis; encouraging preclinical and clinical activity has been demonstrated for an anti-DLL3-antibody–drug conjugate. 
The immune microenvironment of SCLC seems to be distinct from that of other solid tumours, with few tumour-infiltrating lymphocytes and low levels of the immune-checkpoint protein programmed cell death 1 ligand 1 (PD-L1). Nonetheless, immunotherapy with immune-checkpoint inhibitors holds promise for patients with this disease, independent of PD-L1 status. Herein, we review the progress made in uncovering aspects of the biology of SCLC and its microenvironment that are defining new therapeutic strategies and offering renewed hope for patients.", "title": "" }, { "docid": "611c8ce42410f8f678aa5cb5c0de535b", "text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation using LM perplexity and a human evaluation, we demonstrate that the sequence-tosequence approaches outperform the LM-based method. We show correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.", "title": "" }, { "docid": "9ead26b8d3006501377a2fa643407d00", "text": "Face recognition systems are susceptible to presentation attacks such as printed photo attacks, replay attacks, and 3D mask attacks. These attacks, primarily studied in visible spectrum, aim to obfuscate or impersonate a person's identity. This paper presents a unique multispectral video face database for face presentation attack using latex and paper masks. The proposed Multispectral Latex Mask based Video Face Presentation Attack (MLFP) database contains 1350 videos in visible, near infrared, and thermal spectrums. Since the database consists of videos of subjects without any mask as well as wearing ten different masks, the effect of identity concealment is analyzed in each spectrum using face recognition algorithms. We also present the performance of existing presentation attack detection algorithms on the proposed MLFP database. It is observed that the thermal imaging spectrum is most effective in detecting face presentation attacks.", "title": "" }, { "docid": "5460958ae8ad23fb762593a22b8aad07", "text": "The paper presents an artificial neural network based approach in support of cash demand forecasting for automatic teller machine (ATM). On the start phase a three layer feed-forward neural network was trained using Levenberg-Marquardt algorithm and historical data sets. Then ANN was retuned every week using the last observations from ATM. The generalization properties of the ANN were improved using regularization term which penalizes large values of the ANN weights. Regularization term was adapted online depending on complexity of relationship between input and output variables. Performed simulation and experimental tests have showed good forecasting capacities of ANN. At current stage the proposed procedure is in the implementing phase for cash management tasks in ATM network. Key-Words: neural networks, automatic teller machine, cash forecasting", "title": "" } ]
scidocsrr
63e603175c9009da16d78693caab1772
Spectral and Energy-Efficient Wireless Powered IoT Networks: NOMA or TDMA?
[ { "docid": "1a615a022c441f413fcbdb3dbff9e66d", "text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.", "title": "" }, { "docid": "29360e31131f37830e0d6271bab63a6e", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.", "title": "" } ]
[ { "docid": "93b87e8dde0de0c1b198f6a073858d80", "text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.", "title": "" }, { "docid": "45494f14c2d9f284dd3ad3a5be49ca78", "text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.", "title": "" }, { "docid": "fd9461aeac51be30c9d0fbbba298a79b", "text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. 
Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.", "title": "" }, { "docid": "cb266f07461a58493d35f75949c4605e", "text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.", "title": "" }, { "docid": "9a7c915803c84bc2270896bd82b4162d", "text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.", "title": "" }, { "docid": "4e6709bf897352c4e8b24a5b77e4e2c5", "text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. 
In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.", "title": "" }, { "docid": "344e5742cc3c1557589cea05b429d743", "text": "Herein we present a novel big-data framework for healthcare applications. Healthcare data is well suited for bigdata processing and analytics because of the variety, veracity and volume of these types of data. In recent times, many areas within healthcare have been identified that can directly benefit from such treatment. However, setting up these types of architecture is not trivial. We present a novel approach of building a big-data framework that can be adapted to various healthcare applications with relative use, making this a one-stop “Big-Data-Healthcare-in-a-Box”.", "title": "" }, { "docid": "ddc18f2d129d95737b8f0591560d202d", "text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.", "title": "" }, { "docid": "48aa68862748ab502f3942300b4d8e1e", "text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. 
By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.", "title": "" }, { "docid": "b4a2c3679fe2490a29617c6a158b9dbc", "text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.", "title": "" }, { "docid": "77b78ec70f390289424cade3850fc098", "text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.", "title": "" }, { "docid": "1c915d0ffe515aa2a7c52300d86e90ba", "text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.", "title": "" }, { "docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc", "text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. 
In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting", "title": "" }, { "docid": "82a0169afe20e2965f7fdd1a8597b7d3", "text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.", "title": "" }, { "docid": "387634b226820f3aa87fede466acd6c2", "text": "Objectives To evaluate the ability of a short-form FCE to predict future timely and sustained return-to-work. Methods A prospective cohort study was conducted using data collected during a cluster RCT. Subject performance on the items in the short-form FCE was compared to administrative recovery outcomes from a workers’ compensation database. Outcomes included days to claim closure, days to time loss benefit suspension and future recurrence (defined as re-opening a closed claim, restarting benefits, or filing a new claim for injury to the same body region). Analysis included multivariable Cox and logistic regression using a risk factor modeling strategy. Potential confounders included age, sex, injury duration, and job attachment status, among others. Results The sample included 147 compensation claimants with a variety of musculoskeletal injuries. Subjects who demonstrated job demand levels on all FCE items were more likely to have their claims closed (adjusted Hazard Ratio 5.52 (95% Confidence Interval 3.42–8.89), and benefits suspended (adjusted Hazard Ratio 5.45 (95% Confidence Interval 2.73–10.85) over the follow-up year. The proportion of variance explained by the FCE ranged from 18 to 27%. FCE performance was not significantly associated with future recurrence. Conclusion A short-form FCE appears to provide useful information for predicting time to recovery as measured through administrative outcomes, but not injury recurrence. The short-form FCE may be an efficient option for clinicians using FCE in the management of injured workers.", "title": "" }, { "docid": "9fd247bb0f45d09e11c05fca48372ee8", "text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. 
The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.", "title": "" }, { "docid": "0d0fae25e045c730b68d63e2df1dfc7f", "text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.", "title": "" }, { "docid": "04373474e0d9902fdee169492ece6dd0", "text": "The development of ivermectin as a complementary vector control tool will require good quality evidence. This paper reviews the different eco-epidemiological contexts in which mass drug administration with ivermectin could be useful. Potential scenarios and pharmacological strategies are compared in order to help guide trial design. The rationale for a particular timing of an ivermectin-based tool and some potentially useful outcome measures are suggested.", "title": "" }, { "docid": "5c1d6a2616a54cd8d8316b8d37f0147d", "text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. 
Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.", "title": "" } ]
scidocsrr
84e93effd1fc051cbfd1fdda0017d7f0
A review on deep learning for recommender systems: challenges and remedies
[ { "docid": "659b3c56790b92b5b02dcdbab76bef0c", "text": "Recommender systems based on deep learning technology pay huge attention recently. In this paper, we propose a collaborative filtering based recommendation algorithm that utilizes the difference of similarities among users derived from different layers in stacked denoising autoencoders. Since different layers in a stacked autoencoder represent the relationships among items with rating at different levels of abstraction, we can expect to make recommendations more novel, various and serendipitous, compared with a normal collaborative filtering using single similarity. The results of experiments using MovieLens dataset show that the proposed recommendation algorithm can improve the diversity of recommendation lists without great loss of accuracy.", "title": "" }, { "docid": "89fd46da8542a8ed285afb0cde9cc236", "text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.", "title": "" } ]
[ { "docid": "e2666b0eed30a4eed2ad0cde07324d73", "text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.", "title": "" }, { "docid": "83ad15e2ffeebb21705b617646dc4ed7", "text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.", "title": "" }, { "docid": "28ff3b1e9f29d7ae4b57f6565330cde5", "text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.", "title": "" }, { "docid": "ce5efa83002cee32a5ef8b8b73b81a60", "text": "Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. 
We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.", "title": "" }, { "docid": "255a155986548bb873ee0bc88a00222b", "text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.", "title": "" }, { "docid": "91bbea10b8df8a708b65947c8a8832dc", "text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. 
In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.", "title": "" }, { "docid": "9083d1159628f0b9a363aca5dea47591", "text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.", "title": "" }, { "docid": "4c82ba56d6532ddc57c2a2978de7fe5a", "text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.", "title": "" }, { "docid": "179298e4aa5fbd8a9ea08bde263ceaf5", "text": "Healthcare applications are considered as promising fields for wireless sensor networks, where patients can be monitored using wireless medical sensor networks (WMSNs). Current WMSN healthcare research trends focus on patient reliable communication, patient mobility, and energy-efficient routing, as a few examples. However, deploying new technologies in healthcare applications without considering security makes patient privacy vulnerable. Moreover, the physiological data of an individual are highly sensitive. 
Therefore, security is a paramount requirement of healthcare applications, especially in the case of patient privacy, if the patient has an embarrassing disease. This paper discusses the security and privacy issues in healthcare application using WMSNs. We highlight some popular healthcare projects using wireless medical sensor networks, and discuss their security. Our aim is to instigate discussion on these critical issues since the success of healthcare application depends directly on patient security and privacy, for ethic as well as legal reasons. In addition, we discuss the issues with existing security mechanisms, and sketch out the important security requirements for such applications. In addition, the paper reviews existing schemes that have been recently proposed to provide security solutions in wireless healthcare scenarios. Finally, the paper ends up with a summary of open security research issues that need to be explored for future healthcare applications using WMSNs.", "title": "" }, { "docid": "3e98e933aff32193fe4925f39fd04198", "text": "Estimating surface normals is an important task in computer vision, e.g. in surface reconstruction, registration and object detection. In stereo vision, the error of depth reconstruction increases quadratically with distance. This makes estimation of surface normals an especially demanding task. In this paper, we analyze how error propagates from noisy disparity data to the orientation of the estimated surface normal. Firstly, we derive a transformation for normals between disparity space and world coordinates. Afterwards, the propagation of disparity noise is analyzed by means of a Monte Carlo method. Normal reconstruction at a pixel position requires to consider a certain neighborhood of the pixel. The extent of this neighborhood affects the reconstruction error. Our method allows to determine the optimal neighborhood size required to achieve a pre specified deviation of the angular reconstruction error, defined by a confidence interval. We show that the reconstruction error only depends on the distance of the surface point to the camera, the pixel distance to the principal point in the image plane and the angle at which the viewing ray intersects the surface.", "title": "" }, { "docid": "7db9cf29dd676fa3df5a2e0e95842b6e", "text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. 
The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.", "title": "" }, { "docid": "6b064b9f4c90a60fab788f9d5aee8b58", "text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.", "title": "" }, { "docid": "598dd39ec35921242b94f17e24b30389", "text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.", "title": "" }, { "docid": "5bde29ce109714f623ae9d69184a8708", "text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. 
The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.", "title": "" }, { "docid": "e8a0e37fcb90f43785da710792b02c3c", "text": "As the use of Twitter has become more commonplace throughout many nations, its role in political discussion has also increased. This has been evident in contexts ranging from general political discussion through local, state, and national elections (such as in the 2010 Australian elections) to protests and other activist mobilisation (for example in the current uprisings in Tunisia, Egypt, and Yemen, as well as in the controversy around Wikileaks). Research into the use of Twitter in such political contexts has also developed rapidly, aided by substantial advancements in quantitative and qualitative methodologies for capturing, processing, analysing, and visualising Twitter updates by large groups of users. Recent work has especially highlighted the role of the Twitter hashtag – a short keyword, prefixed with the hash symbol ‘#’ – as a means of coordinating a distributed discussion between more or less large groups of users, who do not need to be connected through existing ‘follower’ networks. Twitter hashtags – such as ‘#ausvotes’ for the 2010 Australian elections, ‘#londonriots’ for the coordination of information and political debates around the recent unrest in London, or ‘#wikileaks’ for the controversy around Wikileaks thus aid the formation of ad hoc publics around specific themes and topics. They emerge from within the Twitter community – sometimes as a result of pre-planning or quickly reached consensus, sometimes through protracted debate about what the appropriate hashtag for an event or topic should be (which may also lead to the formation of competing publics using different hashtags). Drawing on innovative methodologies for the study of Twitter content, this paper examines the use of hashtags in political debate in the context of a number of major case studies.", "title": "" }, { "docid": "f117503bf48ea9ddf575dedf196d3bcd", "text": "In recent years, prison officials have increasingly turned to solitary confinement as a way to manage difficult or dangerous prisoners. Many of the prisoners subjected to isolation, which can extend for years, have serious mental illness, and the conditions of solitary confinement can exacerbate their symptoms or provoke recurrence. 
Prison rules for isolated prisoners, however, greatly restrict the nature and quantity of mental health services that they can receive. In this article, we describe the use of isolation (called segregation by prison officials) to confine prisoners with serious mental illness, the psychological consequences of such confinement, and the response of U.S. courts and human rights experts. We then address the challenges and human rights responsibilities of physicians confronting this prison practice. We conclude by urging professional organizations to adopt formal positions against the prolonged isolation of prisoners with serious mental illness.", "title": "" }, { "docid": "1dd4a95adcd4f9e7518518148c3605ac", "text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.", "title": "" }, { "docid": "e7d8f97d7d76ae089842e602b91df21c", "text": "In this paper, we propose a novel text representation paradigm and a set of follow-up text representation models based on cognitive psychology theories. The intuition of our study is that the knowledge implied in a large collection of documents may improve the understanding of single documents. Based on cognitive psychology theories, we propose a general text enrichment framework, study the key factors to enable activation of implicit information, and develop new text representation methods to enrich text with the implicit information. Our study aims to mimic some aspects of human cognitive procedure in which given stimulant words serve to activate understanding implicit concepts. By incorporating human cognition into text representation, the proposed models advance existing studies by mining implicit information from given text and coordinating with most existing text representation approaches at the same time, which essentially bridges the gap between explicit and implicit information. 
Experiments on multiple tasks show that the implicit information activated by our proposed models matches human intuition and significantly improves the performance of the text mining tasks as well.", "title": "" }, { "docid": "2a98cfbd1036ef0cdccbc491b33a9af4", "text": "Optimal facility layout is one of the factors that can affect the efficiency of any organization; annually, millions of dollars in costs are incurred or profits saved because of it. Studies in the field of facility layout can be classified into two general categories: facility layout problems in a static setting and facility layout problems in a dynamic setting. Because the dynamic facility layout problem is the more realistic one, this paper investigates it and tries to consider all necessary aspects of the problem to make it more practical. In this regard, this research develops a three-objective model that tries to simultaneously minimize total operating costs and production time. Since calculating production time using analytical relations is impossible, this research uses simulation and regression analysis of a statistical correlation to measure production time. The developed model is therefore a combination of analytical and statistical relationships. The proposed model is NP-hard, so that even finding an optimal solution for its small-scale instances is very difficult and time consuming. The multi-objective meta-heuristic NSGA-II and NRGA algorithms are used to solve the problem. Since the outputs of meta-heuristic algorithms are highly dependent on the algorithms' input parameters, the Taguchi experimental design method is also used to set the parameters. Also, in order to assess the efficiency of the proposed procedures, the method has been analyzed on generated pilot problems with various aspects. The results of comparing the algorithms on several criteria consistently show the superiority of NSGA-II over NRGA in solving the problem.", "title": "" } ]
scidocsrr
dba48ea89c1a44ac1955dfd6e31a9f91
Large Scale Log Analytics through ELK
[ { "docid": "5a63b6385068fbc24d1d79f9a6363172", "text": "Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.", "title": "" } ]
[ { "docid": "17ac85242f7ee4bc4991e54403e827c4", "text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.", "title": "" }, { "docid": "999d111ff3c65f48f63ee51bd2230172", "text": "We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare but important errors—most importantly, cases for which the model is confident of its prediction (but wrong). In this article, we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictive model-based system to fail. Such techniques are valuable in discovering problematic cases that may not reveal themselves during the normal operation of the system and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle by providing a reward proportional to the magnitude of the predictive model's error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than do traditional techniques for discovering errors in predictive models, and, indeed, they identify many more errors where the machine is (wrongly) confident it is correct. Furthermore, those cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. Beat the Machine identifies the “unknown unknowns.” Beat the Machine has been deployed at an industrial scale by several companies. The main impact has been that firms are changing their perspective on and practice of evaluating predictive models.\n “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.”\n --Donald Rumsfeld", "title": "" }, { "docid": "35586c00530db3fd928512134b4927ec", "text": "Basic definitions concerning the multi-layer feed-forward neural networks are given. The back-propagation training algorithm is explained. 
Partial derivatives of the objective function with respect to the weight and threshold coefficients are derived. These derivatives are valuable for an adaptation process of the considered neural network. Training and generalisation of multi-layer feed-forward neural networks are discussed. Improvements of the standard back-propagation algorithm are reviewed. Example of the use of multi-layer feed-forward neural networks for prediction of carbon-13 NMR chemical shifts of alkanes is given. Further applications of neural networks in chemistry are reviewed. Advantages and disadvantages of multilayer feed-forward neural networks are discussed.", "title": "" }, { "docid": "37552cc90e02204afdd362a7d5978047", "text": "In this talk we introduce visible light communication and discuss challenges and techniques to improve the performance of white organic light emitting diode (OLED) based systems.", "title": "" }, { "docid": "518cb733bfbb746315498c1409d118c5", "text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.", "title": "" }, { "docid": "4d2dad29f0f02d448c78b7beda529022", "text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.", "title": "" }, { "docid": "411d64804b8271b426521db5769cdb6f", "text": "This paper presents APT, a localization system for outdoor pedestrians with smartphones. APT performs better than the built-in GPS module of the smartphone in terms of accuracy. 
This is achieved by introducing a robust dead reckoning algorithm and an error-tolerant algorithm for map matching. When the user is walking with the smartphone, the dead reckoning algorithm monitors steps and walking direction in real time. It then reports new steps and turns to the map-matching algorithm. Based on updated information, this algorithm adjusts the user's location on a map in an error-tolerant manner. If location ambiguity among several routes occurs after adjustments, the GPS module is queried to help eliminate this ambiguity. Evaluations in practice show that the error of our system is less than 1/2 that of GPS.", "title": "" }, { "docid": "fae9789def98f0f5813ed4a644043c2f", "text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Nowadays people are more interested to express and share their views, feedbacks, suggestions, and opinions about a particular topic on the web. People and company rely more on online opinions about products and services for their decision making. A major problem in identifying the opinion classification is high dimensionality of the feature space. Most of these features are irrelevant, redundant, and noisy which affects the performance of the classifier. Therefore, feature selection is an essential step in the fake review detection to reduce the dimensionality of the feature space and to improve accuracy. In this paper, binary artificial bee colony (BABC) with KNN is proposed to solve feature selection problem for sentiment classification. The experimental results demonstrate that the proposed method selects more informative features set compared to the competitive methods as it attains higher classification accuracy.", "title": "" }, { "docid": "ef53a10864facc669b9ac5218f394bca", "text": "Emotion hacking virtual reality (EH-VR) system is an interactive system that hacks one's heartbeat and controls it to accelerate scary VR experience. The EH-VR system provides vibrotactile biofeedback, which resembles a heartbeat, from the floor. The system determines false heartbeat frequency by detecting user's heart rate in real time and calculates false heart rate, which is faster than the one observed according to the quadric equation model. With the system, we demonstrate \"Pressure of unknown\" which is a CG VR space originally created to express the metaphor of scare. A user experiences this space by using a wheel chair as a controller to walk through a VR world displayed via HMD while receiving vibrotac-tile feedback of false heartbeat calculated from its own heart rate from the floor.", "title": "" }, { "docid": "cb6704ade47db83a6338e43897d72956", "text": "Renewable energy sources are essential paths towards sustainable development and CO2 emission reduction. For example, the European Union has set the target of achieving 22% of electricity generation from renewable sources by 2010. However, the extensive use of this energy source is being avoided by some technical problems as fouling and slagging in the surfaces of boiler heat exchangers. Although these phenomena were extensively studied in the last decades in order to optimize the behaviour of large coal power boilers, a simple, general and effective method for fouling control has not been developed. For biomass boilers, the feedstock variability and the presence of new components in ash chemistry increase the fouling influence in boiler performance. 
In particular, heat transfer is widely affected and the boiler capacity becomes dramatically reduced. Unfortunately, the classical approach of regular sootblowing cycles becomes clearly insufficient for them. Artificial Intelligence (AI) provides new means to undertake this problem. This paper illustrates a methodology based on Neural Networks (NNs) and Fuzzy-Logic Expert Systems to select the moment for activating sootblowing in an industrial biomass boiler. The main aim is to minimize the boiler energy and efficiency losses with a proper sootblowing activation. Although the NN type used in this work is well-known and the Hybrid Systems had been extensively used in the last decade, the excellent results obtained in the use of AI in industrial biomass boilers control with regard to previous approaches makes this work a novelty. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce30d02bab5d67559f9220c1db11e80d", "text": "Thalidomide was originally used to treat morning sickness, but was banned in the 1960s for causing serious congenital birth defects. Remarkably, thalidomide was subsequently discovered to have anti-inflammatory and anti-angiogenic properties, and was identified as an effective treatment for multiple myeloma. A series of immunomodulatory drugs — created by chemical modification of thalidomide — have been developed to overcome the original devastating side effects. Their powerful anticancer properties mean that these drugs are now emerging from thalidomide's shadow as useful anticancer agents.", "title": "" }, { "docid": "acf4f5fa5ae091b5e72869213deb643e", "text": "A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.", "title": "" }, { "docid": "e573cffab31721c14725de3e2608eabf", "text": "Sketching on paper is a quick and easy way to communicate ideas. However, many sketch-based systems require people to draw in contrived ways instead of sketching freely as they would on paper. 
NaturaSketch affords a more natural interface through multiple strokes that overlap, cross, and connect. It also features a meshing algorithm to support multiple strokes of different classifications, which lets users design complex 3D shapes from sketches drawn over existing images. To provide a familiar workflow for object design, a set of sketch annotations can also specify modeling and editing operations. NaturaSketch empowers designers to produce a variety of models quickly and easily.", "title": "" }, { "docid": "8e302428a1fd6f7331f5546c22bf4d73", "text": "Automatic extraction of synonyms and/or semantically related words has various applications in Natural Language Processing (NLP). There are currently two mainstream extraction paradigms, namely, lexicon-based and distributional approaches. The former usually suffers from low coverage, while the latter is only able to capture general relatedness rather than strict synonymy. In this paper, two rule-based extraction methods are applied to definitions from a machine-readable dictionary. Extracted synonyms are evaluated in two experiments by solving TOEFL synonym questions and being compared against existing thesauri. The proposed approaches have achieved satisfactory results in both evaluations, comparable to published studies or even the state of the art.", "title": "" }, { "docid": "597f097d5206fc259224b905d4d20e20", "text": "We present here a QT database designed for evaluation of algorithms that detect waveform boundaries in the ECG. The database consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform boundaries for a subset of beats in these recordings have been manually determined by expert annotators using an interactive graphic display to view both signals simultaneously and to insert the annotations. Examples of each morphology were included in this subset of annotated beats; at least 30 beats in each record, 3622 beats in all, were manually annotated in the database. In 11 records, two independent sets of annotations have been included, to allow inter-observer variability studies. The QT Database is available on a CD-ROM in the format previously used for the MIT-BIH Arrhythmia Database and the European ST-T Database, from which some of the recordings in the QT Database have been obtained.", "title": "" }, { "docid": "81814a3ac70e4a2317596185e78e76ef", "text": "Cardiac complications are common after non-cardiac surgery. Peri-operative myocardial infarction occurs in 3% of patients undergoing major surgery. Recently, however, our understanding of the epidemiology of these cardiac events has broadened to include myocardial injury after non-cardiac surgery, diagnosed by an asymptomatic troponin rise, which also carries a poor prognosis. We review the causation of myocardial injury after non-cardiac surgery, with potential for prevention and treatment, based on currently available international guidelines and landmark studies. Postoperative arrhythmias are also a frequent cause of morbidity, with atrial fibrillation and QT-prolongation having specific relevance to the peri-operative period. Postoperative systolic heart failure is rare outside of myocardial infarction or cardiac surgery, but the impact of pre-operative diastolic dysfunction and its ability to cause postoperative heart failure is increasingly recognised.
The latest evidence regarding diastolic dysfunction and the impact on non-cardiac surgery are examined to help guide fluid management for the non-cardiac anaesthetist.", "title": "" }, { "docid": "95395c693b4cdfad722ae0c3545f45ef", "text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.", "title": "" }, { "docid": "e881c2ab6abc91aa8e7cbe54d861d36d", "text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.", "title": "" }, { "docid": "0afbce731c55b9a3d3ced22ad59aa0ef", "text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.", "title": "" } ]
scidocsrr
13417db90bdc029bcd23aa456926d2ad
Secret Intelligence Service Room No. 15 Acting like a Tough Guy: Violent-Sexist Video Games, Identification with Game Characters, Masculine Beliefs, and Empathy for Female Violence Victims
[ { "docid": "0a8763b4ba53cf488692d1c7c6791ab4", "text": "a r t i c l e i n f o To address the longitudinal relation between adolescents' habitual usage of media violence and aggressive behavior and empathy, N = 1237 seventh and eighth grade high school students in Germany completed measures of violent and nonviolent media usage, aggression, and empathy twice in twelve months. Cross-lagged panel analyses showed significant pathways from T1 media violence usage to higher physical aggression and lower empathy at T2. The reverse paths from T1 aggression or empathy to T2 media violence usage were nonsignificant. The links were similar for boys and girls. No links were found between exposure to nonviolent media and aggression or between violent media and relational aggression. T1 physical aggression moderated the impact of media violence usage, with stronger effects of media violence usage among the low aggression group. Introduction Despite the rapidly growing body of research addressing the potentially harmful effects of exposure to violent media, the evidence currently available is still limited in several ways. First, there is a shortage of longitudinal research examining the associations of media violence usage and aggression over time. Such evidence is crucial for examining hypotheses about the causal directions of observed co-variations of media violence usage and aggression that cannot be established on the basis of cross-sectional research. Second, most of the available longitudinal evidence has focused on aggression as the critical outcome variable, giving comparatively little attention to other potentially harmful effects, such as a decrease in empathy with others in distress. Third, the vast majority of studies available to date were conducted in North America. However, even in the age of globalization, patterns of media violence usage and their cultural contexts may vary considerably, calling for a wider database from different countries to examine the generalizability of results to address each of these aspects. It presents findings from a longitudinal study with a large sample of early adolescents in Germany, relating habitual usage of violence in movies, TV series, and interactive video games to self-reports of physical aggression and empathy over a period of twelve months. The study focused on early adolescence as a developmental period characterized by a confluence of risk factors as a result of biological, psychological, and social changes for a range of adverse outcomes. Regular media violence usage may significantly contribute to the overall risk of aggression as one such negative outcome. Media consumption increases from childhood …", "title": "" } ]
[ { "docid": "8780b620d228498447c4f1a939fa5486", "text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.", "title": "" }, { "docid": "92625cb17367de65a912cb59ea767a19", "text": "With the fast progression of data exchange in electronic way, information security is becoming more important in data storage and transmission. Because of widely using images in industrial process, it is important to protect the confidential image data from unauthorized access. In this paper, we analyzed current image encryption algorithms and compression is added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of analysis are given in this paper. Keywords—image encryption, image cryptosystem, security, transmission.", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "dd4750b43931b3b09a5e95eaa74455d1", "text": "In viticulture, there are several applications where bud detection in vineyard images is a necessary task, susceptible of being automated through the use of computer vision methods. A common and effective family of visual detection algorithms are the scanning-window type, that slide a (usually) fixed size window along the original image, classifying each resulting windowed-patch as containing or not containing the target object. The simplicity of these algorithms finds its most challenging aspect in the classification stage. 
Interested in grapevine buds detection in natural field conditions, this paper presents a classification method for images of grapevine buds ranging 100 to 1600 pixels in diameter, captured in outdoor, under natural field conditions, in winter (i.e., no grape bunches, very few leaves, and dormant buds), without artificial background, and with minimum equipment requirements. The proposed method uses well-known computer vision technologies: Scale-Invariant Feature Transform for calculating low-level features, Bag of Features for building an image descriptor, and Support Vector Machines for training a classifier. When evaluated over images containing buds of at least 100 pixels in diameter, the approach achieves a recall higher than 0.9 and a precision of 0.86 over all windowed-patches covering the whole bud and down to 60% of it, and scaled up to window patches containing a proportion of 20%-80% of bud versus background pixels. This robustness on the position and size of the window demonstrates its viability for use as the classification stage in a scanning-window detection algorithms.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" }, { "docid": "f3459ff684d6309ac773c20e03f86183", "text": "We propose an algorithm to separate simultaneously speaking persons from each other, the “cocktail party problem”, using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and decorrelating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. 
Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.", "title": "" }, { "docid": "8741e414199ecfbbf4a4c16d8a303ab5", "text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortious course, easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .", "title": "" }, { "docid": "d6602316a4b1062c177b719fc4985084", "text": "Agricultural residues, such as lignocellulosic materials (LM), are the most attractive renewable bioenergy sources and are abundantly found in nature. Anaerobic digestion has been extensively studied for the effective utilization of LM for biogas production. Experimental investigation of physiochemical changes that occur during pretreatment is needed for developing mechanistic and effective models that can be employed for the rational design of pretreatment processes. 
Various-cutting edge pretreatment technologies (physical, chemical and biological) are being tested on the pilot scale. These different pretreatment methods are widely described in this paper, among them, microaerobic pretreatment (MP) has gained attention as a potential pretreatment method for the degradation of LM, which just requires a limited amount of oxygen (or air) supplied directly during the pretreatment step. MP involves microbial communities under mild conditions (temperature and pressure), uses fewer enzymes and less energy for methane production, and is probably the most promising and environmentally friendly technique in the long run. Moreover, it is technically and economically feasible to use microorganisms instead of expensive chemicals, biological enzymes or mechanical equipment. The information provided in this paper, will endow readers with the background knowledge necessary for finding a promising solution to methane production.", "title": "" }, { "docid": "cc6cf6557a8be12d8d3a4550163ac0a9", "text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.", "title": "" }, { "docid": "c487d81718a194dc008c3f652a2f9b14", "text": "In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments, however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies.", "title": "" }, { "docid": "f80f1952c5b58185b261d53ba9830c47", "text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. 
An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on “guiding” environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.", "title": "" }, { "docid": "781ebbf85a510cfd46f0c824aa4aba7e", "text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.", "title": "" }, { "docid": "9e0cbbe8d95298313fd929a7eb2bfea9", "text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.", "title": "" }, { "docid": "4a1a1b3012f2ce941cc532a55b49f09b", "text": "Gamification informally refers to making a system more game-like.
More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.", "title": "" }, { "docid": "a91d8e09082836bca10b003ef5f7ceff", "text": "Mininet is network emulation software that allows launching a virtual network with switches, hosts and an SDN controller all with a single command on a single Linux kernel. It is a great way to start learning about SDN and Open-Flow as well as test SDN controller and SDN applications. Mininet can be used to deploy large networks on a single computer or virtual machine provided with limited resources. It is freely available open source software that emulates Open-Flow device and SDN controllers. Keywords— SDN, Mininet, Open-Flow, Python, Wireshark", "title": "" }, { "docid": "853b5ab3ed6a9a07c8d11ad32d0e58ad", "text": "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series modelshidden Markov models and linear dynamical systemsand is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.", "title": "" }, { "docid": "24bb26da0ce658ff075fc89b73cad5af", "text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.", "title": "" }, { "docid": "cc3c8ac3c1f0c6ffae41e70a88dc929d", "text": "Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use Nakamoto consensus protocol to reach agreement on the blockchain state between a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) to solve computational puzzles. 
Finding solutions for such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into pools and share rewards amongst all pool members via pooled mining protocol. Pooled mining helps reduce the variance of miners’ payoffs significantly and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than 95% of mining power in Bitcoin emanates from 10 mining pools. Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining increases the transaction censorship significantly since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners’ rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption. In this work, we propose a new protocol design for a decentralized mining pool. Our protocol called SMARTPOOL shows how one can leverage smart contracts, which are autonomous agents themselves running on decentralized blockchains, to decentralize cryptocurrency mining. SMARTPOOL guarantees high security, low reward’s variance for miners and is cost-efficient. We implemented a prototype of SMARTPOOL as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet and our experiments confirm that SMARTPOOL is efficient and ready for practical use.", "title": "" }, { "docid": "fc3c4f6c413719bbcf7d13add8c3d214", "text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.", "title": "" }, { "docid": "d41703226184b92ef3f7feb501aa4c9b", "text": "The first RADAR patent was applied for by Christian Huelsmeyer on April 30, 1904 at the patent office in Berlin, Germany. He was motivated by a ship accident on the river Weser and called his experimental system ”Telemobiloscope”. In this chapter some important and modern topics in radar system design and radar signal processing will be discussed. Waveform design is one innovative topic where new results are available for special applications like automotive radar. Detection theory is a fundamental radar topic which will be discussed in this chapter for new range CFAR schemes which are essential for all radar systems. 
Target recognition has for many years been the dream of all radar engineers. New results for target classification will be discussed for some automotive radar sensors.", "title": "" } ]
scidocsrr
e61322adaf96eaa05e3ccd3121049e27
Fitness Gamification: Concepts, Characteristics, and Applications
[ { "docid": "0c7afb3bee6dd12e4a69632fbdb50ce8", "text": "OBJECTIVES\nTo systematically review levels of metabolic expenditure and changes in activity patterns associated with active video game (AVG) play in children and to provide directions for future research efforts.\n\n\nDATA SOURCES\nA review of the English-language literature (January 1, 1998, to January 1, 2010) via ISI Web of Knowledge, PubMed, and Scholars Portal using the following keywords: video game, exergame, physical activity, fitness, exercise, energy metabolism, energy expenditure, heart rate, disability, injury, musculoskeletal, enjoyment, adherence, and motivation.\n\n\nSTUDY SELECTION\nOnly studies involving youth (< or = 21 years) and reporting measures of energy expenditure, activity patterns, physiological risks and benefits, and enjoyment and motivation associated with mainstream AVGs were included. Eighteen studies met the inclusion criteria. Articles were reviewed and data were extracted and synthesized by 2 independent reviewers. MAIN OUTCOME EXPOSURES: Energy expenditure during AVG play compared with rest (12 studies) and activity associated with AVG exposure (6 studies).\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in energy expenditure and heart rate (from rest).\n\n\nRESULTS\nActivity levels during AVG play were highly variable, with mean (SD) percentage increases of 222% (100%) in energy expenditure and 64% (20%) in heart rate. Energy expenditure was significantly lower for games played primarily through upper body movements compared with those that engaged the lower body (difference, -148%; 95% confidence interval, -231% to -66%; P = .001).\n\n\nCONCLUSIONS\nThe AVGs enable light to moderate physical activity. Limited evidence is available to draw conclusions on the long-term efficacy of AVGs for physical activity promotion.", "title": "" }, { "docid": "5e7a06213a32e0265dcb8bc11a5bb3f1", "text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.", "title": "" } ]
[ { "docid": "eb8321467458401aa86398390c32ae00", "text": "As the wide popularization of online social networks, online users are not content only with keeping online friendship with social friends in real life any more. They hope the system designers can help them exploring new friends with common interest. However, the large amount of online users and their diverse and dynamic interests possess great challenges to support such a novel feature in online social networks. In this paper, by leveraging interest-based features, we design a general friend recommendation framework, which can characterize user interest in two dimensions: context (location, time) and content, as well as combining domain knowledge to improve recommending quality. We also design a potential friend recommender system in a real online social network of biology field to show the effectiveness of our proposed framework.", "title": "" }, { "docid": "ac222a5f8784d7a5563939077c61deaa", "text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impede technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.", "title": "" }, { "docid": "4d9f0cf629cd3695a2ec249b81336d28", "text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.", "title": "" }, { "docid": "4ee5931bf57096913f7e13e5da0fbe7e", "text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. 
The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.", "title": "" }, { "docid": "8a224bf0376321caa30a95318ec9ecf9", "text": "With the rapid development of very large scale integration (VLSI) and continuous scaling in the metal oxide semiconductor field effect transistor (MOSFET), pad corrosion in the aluminum (Al) pad surface has become practical concern in the semiconductor industry. This paper presents a new method to improve the pad corrosion on Al pad surface by using new Al/Ti/TiN film stack. The effects of different Al film stacks on the Al pad corrosion have been investigated. The experiment results show that the Al/Ti/TiN film stack could improve bond pad corrosion effectively comparing to Al/SiON film stack. Wafers processed with new Al film stack were stored up to 28 days and display no pad crystal (PDCY) defects on bond pad surfaces.", "title": "" }, { "docid": "f073abd94a9c5853e561439de35ac9bd", "text": "Evolutionary learning is one of the most popular techniques for designing quantitative investment (QI) products. Trend following (TF) strategies, owing to their briefness and efficiency, are widely accepted by investors. Surprisingly, to the best of our knowledge, no related research has investigated TF investment strategies within an evolutionary learning model. This paper proposes a hybrid long-term and short-term evolutionary trend following algorithm (eTrend) that combines TF investment strategies with the eXtended Classifier Systems (XCS). The proposed eTrend algorithm has two advantages: (1) the combination of stock investment strategies (i.e., TF) and evolutionary learning (i.e., XCS) can significantly improve computation effectiveness and model practicability, and (2) XCS can automatically adapt to market directions and uncover reasonable and understandable trading rules for further analysis, which can help avoid the irrational trading behaviors of common investors. To evaluate eTrend, experiments are carried out using the daily trading data stream of three famous indexes in the Shanghai Stock Exchange. Experimental results indicate that eTrend outperforms the buy-and-hold strategy with high Sortino ratio after the transaction cost. Its performance is also superior to the decision tree and artificial neural network trading models. Furthermore, as the concept drift phenomenon is common in the stock market, an exploratory concept drift analysis is conducted on the trading rules discovered in bear and bull market phases. The analysis revealed interesting and rational results. In conclusion, this paper presents convincing evidence that the proposed hybrid trend following model can indeed generate effective trading guid-", "title": "" }, { "docid": "0e068a4e7388ed456de4239326eb9b08", "text": "The Web so far has been incredibly successful at delivering information to human users. 
So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.", "title": "" }, { "docid": "52d3d3bf1f29e254cbb89c64f3b0d6b5", "text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.", "title": "" }, { "docid": "748eae887bcda0695cbcf1ba1141dd79", "text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Comparing with previous works, the proposed reconfigurable BPF features wider BW tuning range with maximum number of tuning states.", "title": "" }, { "docid": "393711bcd1a8666210e125fb4295e158", "text": "The purpose of a Beyond 4G (B4G) radio access technology, is to cope with the expected exponential increase of mobile data traffic in local area (LA). The requirements related to physical layer control signaling latencies and to hybrid ARQ (HARQ) round trip time (RTT) are in the order of ~1ms. In this paper, we propose a flexible orthogonal frequency division multiplexing (OFDM) based time division duplex (TDD) physical subframe structure optimized for B4G LA environment. We show that the proposed optimizations allow very frequent link direction switching, thus reaching the tight B4G HARQ RTT requirement and significant control signaling latency reductions compared to existing LTE-Advanced and WiMAX technologies.", "title": "" }, { "docid": "310f13dac8d7cf2d1b40878ef6ce051b", "text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.", "title": "" }, { "docid": "ea05a43abee762d4b484b5027e02a03a", "text": "One essential task in information extraction from the medical corpus is drug name recognition. 
Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.", "title": "" }, { "docid": "55fc836c8b0f10486aa6d969d0cae14d", "text": "In this manuscript we explore the ways in which the marketplace metaphor resonates with online dating participants and how this conceptual framework influences how they assess themselves, assess others, and make decisions about whom to pursue. Taking a metaphor approach enables us to highlight the ways in which participants’ language shapes their self-concept and interactions with potential partners. Qualitative analysis of in-depth interviews with 34 participants from a large online dating site revealed that the marketplace metaphor was salient for participants, who employed several strategies that reflected the assumptions underlying the marketplace perspective (including resisting the metaphor). We explore the implications of this metaphor for romantic relationship development, such as the objectification of potential partners. Journal of Social and Personal Relationships © The Author(s), 2010. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav, Vol. 27(4): 427–447. DOI: 10.1177/0265407510361614 This research was funded by Affirmative Action Grant 111579 from the Office of Research and Sponsored Programs at California State University, Stanislaus. An earlier version of this paper was presented at the International Communication Association, 2005. We would like to thank Jack Bratich, Art Ramirez, Lamar Reinsch, Jeanine Turner, and three anonymous reviewers for their helpful comments. All correspondence concerning this article should be addressed to Rebecca D. Heino, Georgetown University, McDonough School of Business, Washington D.C. 20057, USA [e-mail: rdh26@georgetown.edu]. Larry Erbert was the Action Editor on this article. at MICHIGAN STATE UNIV LIBRARIES on June 9, 2010 http://spr.sagepub.com Downloaded from", "title": "" }, { "docid": "2804384964bc8996e6574bdf67ed9cb5", "text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. 
This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.", "title": "" }, { "docid": "5c38ad54e43b71ea5588418620bcf086", "text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.", "title": "" }, { "docid": "a86bc0970dba249e1e53f9edbad3de43", "text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.", "title": "" }, { "docid": "b5a9bbf52279ce7826434b7e5d3ccbb6", "text": "We present our 11-layers deep, double-pathway, 3D Convolutional Neural Network, developed for the segmentation of brain lesions. The developed system segments pathology voxel-wise after processing a corresponding multi-modal 3D patch at multiple scales. 
We demonstrate that it is possible to train such a deep and wide 3D CNN on a small dataset of 28 cases. Our network yields promising results on the task of segmenting ischemic stroke lesions, accomplishing a mean Dice of 64% (66% after postprocessing) on the ISLES 2015 training dataset, ranking among the top entries. Regardless its size, our network is capable of processing a 3D brain volume in 3 minutes, making it applicable to the automated analysis of larger study cohorts.", "title": "" }, { "docid": "653b44b98c78bed426c0e5630145c2ba", "text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.", "title": "" }, { "docid": "daa7773486701deab7b0c69e1205a1d9", "text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.", "title": "" }, { "docid": "c9766e95df62d747f5640b3cab412a3f", "text": "For the last 10 years, interest has grown in low frequency shear waves that propagate in the human body. However, the generation of shear waves by acoustic vibrators is a relatively complex problem, and the directivity patterns of shear waves produced by the usual vibrators are more complicated than those obtained for longitudinal ultrasonic transducers. To extract shear modulus parameters from the shear wave propagation in soft tissues, it is important to understand and to optimize the directivity pattern of shear wave vibrators. This paper is devoted to a careful study of the theoretical and the experimental directivity pattern produced by a point source in soft tissues. Both theoretical and experimental measurements show that the directivity pattern of a point source vibrator presents two very strong lobes for an angle around 35/spl deg/. 
This paper also points out the impact of the near field in the problem of shear wave generation.", "title": "" } ]
scidocsrr
e34fcf16ae45b3687a3d7a89d36306e4
WHICH TYPE OF MOTIVATION IS CAPABLE OF DRIVING ACHIEVEMENT BEHAVIORS SUCH AS EXERCISE IN DIFFERENT PERSONALITIES? BY RAJA AMJOD
[ { "docid": "aa223de93696eec79feb627f899f8e8d", "text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.", "title": "" } ]
[ { "docid": "88492d59d0610e69a4c6b42e40689f35", "text": "In this paper, we describe our participation at the subtask of extraction of relationships between two identified keyphrases. This task can be very helpful in improving search engines for scientific articles. Our approach is based on the use of a convolutional neural network (CNN) trained on the training dataset. This deep learning model has already achieved successful results for the extraction relationships between named entities. Thus, our hypothesis is that this model can be also applied to extract relations between keyphrases. The official results of the task show that our architecture obtained an F1-score of 0.38% for Keyphrases Relation Classification. This performance is lower than the expected due to the generic preprocessing phase and the basic configuration of the CNN model, more complex architectures are proposed as future work to increase the classification rate.", "title": "" }, { "docid": "73577e88b085e9e187328ce36116b761", "text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.", "title": "" }, { "docid": "6ddbdf3b7f8b2bc13d2c1babcabbadc6", "text": "Improved sensors in the automotive field are leading to multi-object tracking of extended objects becoming more and more important for advanced driver assistance systems and highly automated driving. This paper proposes an approach that combines a PHD filter for extended objects, viz. objects that originate multiple measurements while also estimating the shape of the objects via constructing an object-local occupancy grid map and then extracting a polygonal chain. This allows tracking even in traffic scenarios where unambiguous segmentation of measurements is difficult or impossible. In this work, this is achieved using multiple segmentation assumptions by applying different parameter sets for the DBSCAN clustering algorithm. The proposed algorithm is evaluated using simulated data and real sensor data from a test track including highly accurate D-GPS and IMU data as a ground truth.", "title": "" }, { "docid": "08dfd4bb173f7d70cff710590b988f08", "text": "Gallium-67 citrate is currently considered as the tracer of first choice in the diagnostic workup of fever of unknown origin (FUO). Fluorine-18 2'-deoxy-2-fluoro-D-glucose (FDG) has been shown to accumulate in malignant tumours but also in inflammatory processes. The aim of this study was to prospectively evaluate FDG imaging with a double-head coincidence camera (DHCC) in patients with FUO in comparison with planar and single-photon emission tomography (SPET) 67Ga citrate scanning. Twenty FUO patients underwent FDG imaging with a DHCC which included transaxial and longitudinal whole-body tomography. 
In 18 of these subjects, 67Ga citrate whole-body and SPET imaging was performed. The 67Ga citrate and FDG images were interpreted by two investigators, both blinded to the results of other diagnostic modalities. Forty percent (8/20) of the patients had infection, 25% (5/20) had auto-immune diseases, 10% (2/20) had neoplasms and 15% (3/20) had other diseases. Fever remained unexplained in 10% (2/20) of the patients. Of the 20 patients studied, FDG imaging was positive and essentially contributed to the final diagnosis in 11 (55%). The sensitivity of transaxial FDG tomography in detecting the focus of fever was 84% and the specificity, 86%. Positive and negative predictive values were 92% and 75%, respectively. If the analysis was restricted to the 18 patients who were investigated both with 67Ga citrate and FDG, sensitivity was 81% and specificity, 86%. Positive and negative predictive values were 90% and 75%, respectively. The diagnostic accuracy of whole-body FDG tomography (again restricted to the aforementioned 18 patients) was lower (sensitivity, 36%; specificity, 86%; positive and negative predictive values, 80% and 46%, respectively). 67Ga citrate SPET yielded a sensitivity of 67% in detecting the focus of fever and a specificity of 78%. Positive and negative predictive values were 75% and 70%, respectively. A low sensitivity (45%), but combined with a high specificity (100%), was found in planar 67Ga imaging. Positive and negative predictive values were 100% and 54%, respectively. It is concluded that in the context of FUO, transaxial FDG tomography performed with a DHCC is superior to 67Ga citrate SPET. This seems to be the consequence of superior tracer kinetics of FDG compared with those of 67Ga citrate and of a better spatial resolution of a DHCC system compared with SPET imaging. In patients with FUO, FDG imaging with either dedicated PET or DHCC should be considered the procedure of choice.", "title": "" }, { "docid": "8b0a90d4f31caffb997aced79c59e50c", "text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. 
The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. 
They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the", "title": "" }, { "docid": "dde695574d7007f6f6c5fc06b2d4468a", "text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.", "title": "" }, { "docid": "6087ad77caa9947591eb9a3f8b9b342d", "text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.", "title": "" }, { "docid": "d4c1976f8796122eea98f0c3b7577a6b", "text": "Results from a new experiment in the Philippines shed light on the effects of voter information on vote buying and incumbent advantage. 
The treatment provided voters with information about the existence of a major spending program and the proposed allocations and promises of mayoral candidates just prior municipal elections. It left voters more knowledgeable about candidates’ proposed policies and increased the salience of spending. Treated voters were more likely to be targeted for vote buying. We develop a model of vote buying that accounts for these results. The information we provided attenuated incumbent advantage, prompting incumbents to increase their vote buying in response. Consistent with this explanation, both knowledge and vote buying impacts were higher in incumbent-dominated municipalities. Our findings show that, in a political environment where vote buying is the currency of electoral mobilization, incumbent efforts to increase voter welfare may take the form of greater vote buying. ∗This project would not have been possible without the support and cooperation of PPCRV volunteers in Ilocos Norte and Ilocos Sur. We are grateful to Michael Davidson for excellent research assistance and to Prudenciano Gordoncillo and the UPLB team for collecting the data. We thank Marcel Fafchamps, Clement Imbert, Pablo Querubin, Simon Quinn and two anonymous reviewers for constructive comments on the pre-analysis plan. Pablo Querubin graciously shared his precinct-level data from the 2010 elections with us. We thank conference and seminar participants at Gothenburg, Copenhagen, and Oxford for comments. The project received funding from the World Bank and ethics approval from the University of Oxford Economics Department (Econ DREC Ref. No. 1213/0014). All remaining errors are ours. The opinions and conclusions expressed here are those of the authors and not those of the World Bank or the Inter-American Development Bank. †University of British Columbia; email: cesi.cruz@ubc.ca ‡Inter-American Development Bank; email: pkeefer@iadb.org §Oxford University; email: julien.labonne@bsg.ox.ac.uk", "title": "" }, { "docid": "494b375064fbbe012b382d0ad2db2900", "text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. So, what does the current evidence say?", "title": "" }, { "docid": "e15405f1c0fb52be154e79a2976fbb6d", "text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. 
In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.", "title": "" }, { "docid": "394f71d22294ec8f6704ad484a825b20", "text": "Despite decades of research, the roles of climate and humans in driving the dramatic extinctions of large-bodied mammals during the Late Quaternary remain contentious. We use ancient DNA, species distribution models and the human fossil record to elucidate how climate and humans shaped the demographic history of woolly rhinoceros, woolly mammoth, wild horse, reindeer, bison and musk ox. We show that climate has been a major driver of population change over the past 50,000 years. However, each species responds differently to the effects of climatic shifts, habitat redistribution and human encroachment. Although climate change alone can explain the extinction of some species, such as Eurasian musk ox and woolly rhinoceros, a combination of climatic and anthropogenic effects appears to be responsible for the extinction of others, including Eurasian steppe bison and wild horse. We find no genetic signature or any distinctive range dynamics distinguishing extinct from surviving species, underscoring the challenges associated with predicting future responses of extant mammals to climate and human-mediated habitat change. Toward the end of the Late Quaternary, beginning c. 50,000 years ago, Eurasia and North America lost c. 36% and 72% of their large-bodied mammalian genera (megafauna), respectively1. The debate surrounding the potential causes of these extinctions has focused primarily on the relative roles of climate and humans2,3,4,5. In general, the proportion of species that went extinct was greatest on continents that experienced the most dramatic Correspondence and requests for materials should be addressed to E.W (ewillerslev@snm.ku.dk). *Joint first authors †Deceased Supplementary Information is linked to the online version of the paper at www.nature.com/nature. Author contributions E.W. initially conceived and headed the overall project. C.R. headed the species distribution modelling and range measurements. E.D.L. and J.T.S. extracted, amplified and sequenced the reindeer DNA sequences. J.B. extracted, amplified and sequenced the woolly rhinoceros DNA sequences; M.H. generated part of the woolly rhinoceros data. J.W., K-P.K., J.L. and R.K.W. generated the horse DNA sequences; A.C. generated part of the horse data. L.O., E.D.L. and B.S. analysed the genetic data, with input from R.N., K.M., M.A.S. and S.Y.W.H. Palaeoclimate simulations were provided by P.B., A.M.H, J.S.S. and P.J.V. The directly-dated spatial LAT/LON megafauna locality information was collected by E.D.L., K.A.M., D.N.-B., D.B. and A.U.; K.A.M. and D.N-B performed the species distribution modelling and range measurements. M.B. carried out the gene-climate correlation. A.U. and D.B. assembled the human Upper Palaeolithic sites from Eurasia. T.G. and K.E.G. assembled the archaeofaunal assemblages from Siberia. A.U. analysed the spatial overlap of humans and megafauna and the archaeofaunal assemblages. E.D.L., L.O., B.S., K.A.M., D.N.-B., M.K.B., A.U., T.G. and K.E.G. wrote the Supplementary Information. 
D.F., G.Z., T.W.S., K.A-S., G.B., J.A.B., D.L.J., P.K., T.K., X.L., L.D.M., H.G.M., D.M., M.M., E.S., M.S., R.S.S., T.S., E.S., A.T., R.W., A.C. provided the megafauna samples used for ancient DNA analysis. E.D.L. made the figures. E.D.L, L.O. and E.W. wrote the majority of the manuscript, with critical input from B.S., M.H., K.A.M., M.T.P.G., C.R., R.K.W, A.U. and the remaining authors. Mitochondrial DNA sequences have been deposited in GenBank under accession numbers JN570760-JN571033. Reprints and permissions information is available at www.nature.com/reprints. NIH Public Access Author Manuscript Nature. Author manuscript; available in PMC 2014 June 25. Published in final edited form as: Nature. ; 479(7373): 359–364. doi:10.1038/nature10574. N IH -P A A uhor M anscript N IH -P A A uhor M anscript N IH -P A A uhor M anscript climatic changes6, implying a major role of climate in species loss. However, the continental pattern of megafaunal extinctions in North America approximately coincides with the first appearance of humans, suggesting a potential anthropogenic contribution to species extinctions3,5. Demographic trajectories of different taxa vary widely and depend on the geographic scale and methodological approaches used3,5,7. For example, genetic diversity in bison8,9, musk ox10 and European cave bear11 declines gradually from c. 50–30,000 calendar years ago (ka BP). In contrast, sudden losses of genetic diversity are observed in woolly mammoth12,13 and cave lion14 long before their extinction, followed by genetic stability until the extinction events. It remains unresolved whether the Late Quaternary extinctions were a cross-taxa response to widespread climatic or anthropogenic stressors, or were a species-specific response to one or both factors15,16. Additionally, it is unclear whether distinctive genetic signatures or geographic range-size dynamics characterise extinct or surviving species— questions of particular importance to the conservation of extant species. To disentangle the processes underlying population dynamics and extinction, we investigate the demographic histories of six megafauna herbivores of the Late Quaternary: woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), horse (wild Equus ferus and living domestic Equus caballus), reindeer/caribou (Rangifer tarandus), bison (Bison priscus/Bison bison) and musk ox (Ovibos moschatus). These taxa were characteristic of Late Quaternary Eurasia and/or North America and represent both extinct and extant species. Our analyses are based on 846 radiocarbon-dated mitochondrial DNA (mtDNA) control region sequences, 1,439 directly-dated megafaunal remains, and 6,291 radiocarbon determinations associated with Upper Palaeolithic human occupations in Eurasia. We reconstruct the demographic histories of the megafauna herbivores from ancient DNA data, model past species distributions and determine the geographic overlap between humans and megafauna over the last 50,000 years. We use these data to investigate how climate change and anthropogenic impacts affected species dynamics at continental and global scales, and contributed to in the extinction of some species and the survival of others. Effects of climate change differ across species and continents The direct link between climate change, population size and species extinctions is difficult to document10. However, population size is likely controlled by the amount of available habitat and is indicated by the geographic range of a species17,18. 
We assessed the role of climate using species distribution models, dated megafauna fossil remains and palaeoclimatic data on temperature and precipitation. We estimated species range sizes at the time periods of 42, 30, 21 and 6 ka BP as a proxy for habitat availability (Fig. 1; Supplementary Information section S1). Range size dynamics were then compared to demographic histories inferred from ancient DNA using three distinct analyses (Supplementary Information section S3): (i) coalescent-based estimation of changes in effective population size through time (Bayesian skyride19), which allows detection of changes in global genetic diversity; (ii) serial coalescent simulation followed by Approximate Bayesian Computation, which selects among different models describing continental population dynamics; and (iii) isolation-by-distance analysis, which estimates Lorenzen et al. Page 2 Nature. Author manuscript; available in PMC 2014 June 25. N IH -P A A uhor M anscript N IH -P A A uhor M anscript N IH -P A A uhor M anscript potential population structure and connectivity within continents. If climate was a major factor driving species population sizes, we would expect expansion and contraction of a species’ geographic range to mirror population increase and decline, respectively. We find a positive correlation between changes in the size of available habitat and genetic diversity for the four species—horse, reindeer, bison and musk ox—for which we have range estimates spanning all four time-points (the correlation is not statistically significant for reindeer: p = 0.101) (Fig. 2; Supplementary Information section S4). Hence, species distribution modelling based on fossil distributions and climate data are congruent with estimates of effective population size based on ancient DNA data, even in species with very different life-history traits. We conclude that climate has been a major driving force in megafauna population changes over the past 50,000 years. It is noteworthy that both estimated modelled ranges and genetic data are derived from a subset of the entire fossil record (Supplementary Information sections S1 and S3). Thus, changes in effective population size and range size may change with the addition of more data, especially from outside the geographical regions covered by the present study. However, we expect that the reported positive correlation will prevail when congruent data are compared. The best-supported models of changes in effective population size in North America and Eurasia during periods of dramatic climatic change during the past 50,000 years are those in which populations increase in size (Fig. 3, Supplementary Information section S3). This is true for all taxa except bison. However, the timing is not synchronous across populations. Specifically, we find highest support for population increase beginning c. 34 ka BP in Eurasian horse, reindeer and musk ox (Fig. 3a). Eurasian mammoth and North American horse increase prior to the Last Glacial Maximum (LGM) c. 26 ka BP. Models of population increase in woolly rhinoceros and North American mammoth fit equally well before and after the LGM, and North American reindeer populations increase later still. Only North American bison shows a population decline (Fig. 3b), the intensity of which likely swamps the signal of global population increase starting at c. 
35 ka BP identified in the skyride plot", "title": "" }, { "docid": "b96a571e57a3121746d841bed4af4dbe", "text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.", "title": "" }, { "docid": "a981db3aa149caec10b1824c82840782", "text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.", "title": "" }, { "docid": "42fd940e239ed3748b007fde8b583b25", "text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.", "title": "" }, { "docid": "f0f6125a0d718789715c3760db18161e", "text": "Detecting fluid emissions (e.g. urination or leaks) that extend beyond containment systems (e.g. diapers or adult pads) is a cause of concern for users and developers of wearable fluid containment products. Immediate, automated detection would allow users to address the situation quickly, preventing medical conditions such as adverse skin effects and avoiding embarrassment. 
For product development, fluid emission detection systems would enable more accurate and efficient lab and field evaluation of absorbent products. This paper describes the development of a textile-based fluid-detection sensing method that uses a multi-layer \"keypad matrix\" sensing paradigm using stitched conductive threads. Bench characterization tests determined the effects of sensor spacing, spacer fabric property, and contact pressures on wetness detection for a 5mL minimum benchmark fluid volume. The sensing method and bench-determined requirements were then applied in a close-fitting torso garment for babies that fastens at the crotch (onesie) that is able to detect diaper leakage events. Mannequin testing of the resulting garment confirmed the ability of using wetness sensing timing to infer location of induced 5 mL leaks.", "title": "" }, { "docid": "fc5a04c795fbfdd2b6b53836c9710e4d", "text": "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.", "title": "" }, { "docid": "046f6c5cc6065c1cb219095fb0dfc06f", "text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.", "title": "" }, { "docid": "575da85b3675ceaec26143981dbe9b53", "text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "58c2f9f5f043f87bc51d043f70565710", "text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.", "title": "" }, { "docid": "9b47d3883d85c0fc61b3b033bdc8aee9", "text": "Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.", "title": "" } ]
scidocsrr
f59f315f9c0279ab1456d3ae59527e07
Multiobjective Combinatorial Optimization by Using Decomposition and Ant Colony
[ { "docid": "3824a61e476fa359a104d03f7a99262c", "text": "We describe an artificial ant colony capable of solving the travelling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.", "title": "" } ]
[ { "docid": "cd1274c785a410f0e38b8e033555ee9b", "text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.", "title": "" }, { "docid": "340a2fd43f494bb1eba58629802a738c", "text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.", "title": "" }, { "docid": "13c6e4fc3a20528383ef7625c9dd2b79", "text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.", "title": "" }, { "docid": "a1ffd254e355cf312bf269ec3751200d", "text": "Existing RGB-D object recognition methods either use channel specific handcrafted features, or learn features with deep networks. The former lack representation ability while the latter require large amounts of training data and learning time. In real-time robotics applications involving RGB-D sensors, we do not have the luxury of both. In this paper, we propose Localized Deep Extreme Learning Machines (LDELM) that efficiently learn features from RGB-D data. 
By using localized patches, not only is the problem of data sparsity solved, but the learned features are robust to occlusions and viewpoint variations. LDELM learns deep localized features in an unsupervised way from random patches of the training data. Each image is then feed-forwarded, patch-wise, through the LDELM to form a cuboid of features. The cuboid is divided into cells and pooled to get the final compact image representation which is then used to train an ELM classifier. Experiments on the benchmark Washington RGB-D and 2D3D datasets show that the proposed algorithm not only is significantly faster to train but also outperforms state-of-the-art methods in terms of accuracy and classification time.", "title": "" }, { "docid": "251a47eb1a5307c5eba7372ce09ea641", "text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.", "title": "" }, { "docid": "0850f46a4bcbe1898a6a2dca9f61ea61", "text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.", "title": "" }, { "docid": "3f21c1bb9302d29bc2c816aaabf2e613", "text": "BACKGROUND\nPlasma brain natriuretic peptide (BNP) level increases in proportion to the degree of right ventricular dysfunction in pulmonary hypertension. We sought to assess the prognostic significance of plasma BNP in patients with primary pulmonary hypertension (PPH).\n\n\nMETHODS AND RESULTS\nPlasma BNP was measured in 60 patients with PPH at diagnostic catheterization, together with atrial natriuretic peptide, norepinephrine, and epinephrine. 
Measurements were repeated in 53 patients after a mean follow-up period of 3 months. Forty-nine of the patients received intravenous or oral prostacyclin. During a mean follow-up period of 24 months, 18 patients died of cardiopulmonary causes. According to multivariate analysis, baseline plasma BNP was an independent predictor of mortality. Patients with a supramedian level of baseline BNP (>/=150 pg/mL) had a significantly lower survival rate than those with an inframedian level, according to Kaplan-Meier survival curves (P<0.05). Plasma BNP in survivors decreased significantly during the follow-up (217+/-38 to 149+/-30 pg/mL, P<0. 05), whereas that in nonsurvivors increased (365+/-77 to 544+/-68 pg/mL, P<0.05). Thus, survival was strikingly worse for patients with a supramedian value of follow-up BNP (>/=180 pg/mL) than for those with an inframedian value (P<0.0001).\n\n\nCONCLUSIONS\nA high level of plasma BNP, and in particular, a further increase in plasma BNP during follow-up, may have a strong, independent association with increased mortality rates in patients with PPH.", "title": "" }, { "docid": "168a959b617dc58e6355c1b0ab46c3fc", "text": "Detection of true human emotions has attracted a lot of interest in the recent years. The applications range from e-retail to health-care for developing effective companion systems with reliable emotion recognition. This paper proposes heart rate variability (HRV) features extracted from photoplethysmogram (PPG) signal obtained from a cost-effective PPG device such as Pulse Oximeter for detecting and recognizing the emotions on the basis of the physiological signals. The HRV features obtained from both time and frequency domain are used as features for classification of emotions. These features are extracted from the entire PPG signal obtained during emotion elicitation and baseline neutral phase. For analyzing emotion recognition, using the proposed HRV features, standard video stimuli are used. We have considered three emotions namely, happy, sad and neutral or null emotions. Support vector machines are used for developing the models and features are explored to achieve average emotion recognition of 83.8% for the above model and listed features.", "title": "" }, { "docid": "f7ce012a5943be5137df7d414e9de75a", "text": "As multi-core processors proliferate, it has become more important than ever to ensure efficient execution of parallel jobs on multiprocessor systems. In this paper, we study the problem of scheduling parallel jobs with arbitrary release time on multiprocessors while minimizing the jobs’ mean response time. We focus on non-clairvoyant scheduling schemes that adaptively reallocate processors based on periodic feedbacks from the individual jobs. Since it is known that no deterministic non-clairvoyant algorithm is competitive for this problem,we focus on resource augmentation analysis, and show that two adaptive algorithms, Agdeq and Abgdeq, achieve competitive performance using O(1) times faster processors than the adversary. These results are obtained through a general framework for analyzing the mean response time of any two-level adaptive scheduler. 
Our simulation results verify the effectiveness of Agdeq and Abgdeq by evaluating their performances over a wide range of workloads consisting of synthetic parallel jobs with different parallelism characteristics.", "title": "" }, { "docid": "a7959808cb41963e8d204c3078106842", "text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.", "title": "" }, { "docid": "c13247847d60a5ebd19822140403a238", "text": "Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs: much of the complexity of writing thread-safe and scalable programs is hidden in the library. To use this package, programmers still need to reengineer existing code. This is tedious because it requires changing many lines of code, is error-prone because programmers can use the wrong APIs, and is omission-prone because programmers can miss opportunities to use the enhanced APIs. This paper presents our tool, Concurrencer, that enables programmers to refactor sequential code into parallel code that uses three j.u.c. concurrent utilities. Concurrencer does not require any program annotations. Its transformations span multiple, non-adjacent, program statements. A find-and-replace tool can not perform such transformations, which require program analysis. Empirical evaluation shows that Concurrencer refactors code effectively: Concurrencer correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.", "title": "" }, { "docid": "19d79b136a9af42ac610131217de8c08", "text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of em-pathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. 
The results of our study indicate that empathic character response can significantly decrease user stress see front matter r 2004 Elsevier Ltd. All rights reserved. .ijhcs.2004.11.009 cle is a significantly revised and extended version of Prendinger et al. (2003). nding author. Tel.: +813 4212 2650; fax: +81 3 3556 1916. dresses: helmut@nii.ac.jp (H. Prendinger), jmori@miv.t.u-tokyo.ac.jp (J. Mori), v.t.u-tokyo.ac.jp (M. Ishizuka).", "title": "" }, { "docid": "4b3592efd8a4f6f6c9361a6f66a30a5f", "text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E", "title": "" }, { "docid": "73b4cceb1546a94260c75ae8bed8edd8", "text": "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15% points, while converging three times as fast as other triplet-based losses.", "title": "" }, { "docid": "0cc665089be9aa8217baac32f0385f41", "text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. 
However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "f249a6089a789e52eeadc8ae16213bc1", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "9c52333616cf2b1dce267333f4fad2ba", "text": "We present a new type of actuatable display, called Tilt Displays, that provide visual feedback combined with multi-axis tilting and vertical actuation. 
Their ability to physically mutate provides users with an additional information channel that facilitates a range of new applications including collaboration and tangible entertainment while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a mobile 3x3 custom built prototype, we examine the design space around Tilt Displays, categorise output modalities and conduct two user studies. The first, an exploratory study examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understand interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.", "title": "" }, { "docid": "6a2e3c783b468474ca0f67d7c5af456c", "text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.", "title": "" }, { "docid": "afe2bd0d8c8ad5495eb4907bf7ffa28d", "text": "Shannnon entropy is an efficient tool to measure uncertain information. However, it cannot handle the more uncertain situation when the uncertainty is represented by basic probability assignment (BPA), instead of probability distribution, under the framework of Dempster Shafer evidence theory. To address this issue, a new entropy, named as Deng entropy, is proposed. The proposed Deng entropy is the generalization of Shannnon entropy. If uncertain information is represented by probability distribution, the uncertain degree measured by Deng entropy is the same as that of Shannnon’s entropy. Some numerical examples are illustrated to shown the efficiency of Deng entropy.", "title": "" } ]
scidocsrr
c916cb0706485d34dbd445027e7ab2c2
Heuristic Feature Selection for Clickbait Detection
[ { "docid": "40da1f85f7bdc84537a608ce6bec0e17", "text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.", "title": "" }, { "docid": "3a7c0ab68349e502d3803e7dd77bd69d", "text": "Clickbait has become a nuisance on social media. To address the urging task of clickbait detection, we constructed a new corpus of 38,517 annotated Twitter tweets, the Webis Clickbait Corpus 2017. To avoid biases in terms of publisher and topic, tweets were sampled from the top 27 most retweeted news publishers, covering a period of 150 days. Each tweet has been annotated on 4-point scale by five annotators recruited at Amazon’s Mechanical Turk. The corpus has been employed to evaluate 12 clickbait detectors submitted to the Clickbait Challenge 2017. Download: https://webis.de/data/webis-clickbait-17.html Challenge: https://clickbait-challenge.org", "title": "" } ]
[ { "docid": "ba2e16103676fa57bc3ca841106d2d32", "text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.", "title": "" }, { "docid": "548d87ac6f8a023d9f65af371ad9314c", "text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.", "title": "" }, { "docid": "70c8caf1bdbdaf29072903e20c432854", "text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.", "title": "" }, { "docid": "6737955fd1876a40fc0e662a4cac0711", "text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. 
The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.", "title": "" }, { "docid": "34b3c5ee3ea466c23f5c7662f5ce5b33", "text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.", "title": "" }, { "docid": "dc83550afd690e371283428647ed806e", "text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.", "title": "" }, { "docid": "43975c43de57d889b038cdee8b35e786", "text": "We present an algorithm for computing rigorous solutions to a large class of ordinary differential equations. The main algorithm is based on a partitioning process and the use of interval arithmetic with directed rounding. As an application, we prove that the Lorenz equations support a strange attractor, as conjectured by Edward Lorenz in 1963. This conjecture was recently listed by Steven Smale as one of several challenging problems for the twenty-first century. We also prove that the attractor is robust, i.e., it persists under small perturbations of the coefficients in the underlying differential equations. Furthermore, the flow of the equations admits a unique SRB measure, whose support coincides with the attractor. 
The proof is based on a combination of normal form theory and rigorous computations.", "title": "" }, { "docid": "d1b20385d90fe1e98a07f9cf55af6adb", "text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.", "title": "" }, { "docid": "0674479836883d572b05af6481f27a0d", "text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. 
More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erd˝ os-Rényi model 91 5.2. The diameter of G n,p 95 iii iv CONTENTS 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if˜w > 1? 115 6.3. No giant component if˜w < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …", "title": "" }, { "docid": "176dc97bd2ce3c1fd7d3a8d6913cff70", "text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.", "title": "" }, { "docid": "e59b7782cefc46191d36ba7f59d2f2b8", "text": "Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. 
Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy.", "title": "" }, { "docid": "172ee4ea5615c415423b4224baa31d86", "text": "Many companies are deploying services largely based on machine-learning algorithms for sophisticated processing of large amounts of data, either for consumers or industry. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity/area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines, and evaluate performance by integrating electrical and optical inter-chip interconnects separately. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 656.63× over a GPU, and reduce the energy by 184.05× on average for a 64-chip system. We implement the node down to the place and route at 28 nm, containing a combination of custom storage and computational units, with electrical inter-chip interconnects.", "title": "" }, { "docid": "22e21aab5d41c84a26bc09f9b7402efa", "text": "Skeem for their thoughtful comments and suggestions.", "title": "" }, { "docid": "381c02fb1ce523ddbdfe3acdde20abf1", "text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.", "title": "" }, { "docid": "be311c7a047a18fbddbab120aa97a374", "text": "This paper presents a novel mechatronics master-slave setup for hand telerehabilitation. The system consists of a sensorized glove acting as a remote master and a powered hand exoskeleton acting as a slave. The proposed architecture presents three main innovative solutions. First, it provides the therapist with an intuitive interface (a sensorized wearable glove) for conducting the rehabilitation exercises. Second, the patient can benefit from a robot-aided physical rehabilitation in which the slave hand robotic exoskeleton can provide an effective treatment outside the clinical environment without the physical presence of the therapist. Third, the mechatronics setup is integrated with a sensorized object, which allows for the execution of manipulation exercises and the recording of patient's improvements. 
In this paper, we also present the results of the experimental characterization carried out to verify the system usability of the proposed architecture with healthy volunteers.", "title": "" }, { "docid": "20eaba97d10335134fa79835966643ba", "text": "Limited research has been done on exoskeletons to enable faster movements of the lower extremities. An exoskeleton's mechanism can actually hinder agility by adding weight, inertia and friction to the legs; compensating inertia through control is particularly difficult due to instability issues. The added inertia will reduce the natural frequency of the legs, probably leading to lower step frequency during walking. We present a control method that produces an approximate compensation of an exoskeleton's inertia. The aim is making the natural frequency of the exoskeleton-assisted leg larger than that of the unaided leg. The method uses admittance control to compensate the weight and friction of the exoskeleton. Inertia compensation is emulated by adding a feedback loop consisting of low-pass filtered acceleration multiplied by a negative gain. This gain simulates negative inertia in the low-frequency range. We tested the controller on a statically supported, single-DOF exoskeleton that assists swing movements of the leg. Subjects performed movement sequences, first unassisted and then using the exoskeleton, in the context of a computer-based task resembling a race. With zero inertia compensation, the steady-state frequency of leg swing was consistently reduced. Adding inertia compensation enabled subjects to recover their normal frequency of swing.", "title": "" }, { "docid": "49e148ddb4c5798c157e8568c10fae3d", "text": "Aesthetic quality estimation of an image is a challenging task. In this paper, we introduce a deep CNN approach to tackle this problem. We adopt the state-of-the-art object-recognition CNN as our baseline model, and adapt it for handling several high-level attributes. The networks capable of dealing with these high-level concepts are then fused by a learned logical connector for predicting the aesthetic rating. Results on the standard benchmark show the effectiveness of our approach.", "title": "" }, { "docid": "644d2fcc7f2514252c2b9da01bb1ef42", "text": "We now describe an interesting application of SVD to text documents. Suppose we represent documents as a bag of words, so Xij is the number of times word j occurs in document i, for j = 1 : W and i = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a given word, we can use standard search procedures, but this can get confused by synonymy (different words with the same meaning) and polysemy (same word with different meanings). An alternative approach is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, where K is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrieval performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/vectors: 1", "title": "" }, { "docid": "1aa89c7b8be417345d78d1657d5f487f", "text": "This paper proposes a new novel snubberless current-fed half-bridge front-end isolated dc/dc converter-based inverter for photovoltaic applications. It is suitable for grid-tied (utility interface) as well as off-grid (standalone) application based on the mode of control. 
The proposed converter attains clamping of the device voltage by secondary modulation, thus eliminating the need of snubber or active-clamp. Zero-current switching or natural commutation of primary devices and zero-voltage switching of secondary devices is achieved. Soft-switching is inherent owing to the proposed secondary modulation and is maintained during wide variation in voltage and power transfer capacity and thus is suitable for photovoltaic (PV) applications. Primary device voltage is clamped at reflected output voltage, and secondary device voltage is clamped at output voltage. Steady-state operation and analysis, and design procedure are presented. Simulation results using PSIM 9.0 are given to verify the proposed analysis and design. An experimental converter prototype rated at 200 W has been designed, built, and tested in the laboratory to verify and demonstrate the converter performance over wide variations in input voltage and output power for PV applications. The proposed converter is a true isolated boost converter and has higher voltage conversion (boost) ratio compared to the conventional active-clamped converter.", "title": "" }, { "docid": "ab813ff20324600d5b765377588c9475", "text": "Estimating the flows of rivers can have significant economic impact, as this can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal of the paper is to utilize the time series as a benchmark to compare between several neural-network forecasting methods.We compare between four different methods to preprocess the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare between three different methods for the multistep ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation through time scheme. We also include a theoretical comparison between these three methods. The final comparison is between different methods to perform longer horizon forecast, and that includes ways to partition the problem into the several subproblems of forecasting K steps ahead.", "title": "" } ]
scidocsrr
a32a6f293a22655c403fcf746949e9ac
Privometer: Privacy protection in social networks
[ { "docid": "1aa01ca2f1b7f5ea8ed783219fe83091", "text": "This paper presents NetKit, a modular toolkit for classifica tion in networked data, and a case-study of its application to a collection of networked data sets use d in prior machine learning research. Networked data are relational data where entities are inter connected, and this paper considers the common case where entities whose labels are to be estimated a re linked to entities for which the label is known. NetKit is based on a three-component framewo rk, comprising a local classifier, a relational classifier, and a collective inference procedur . Various existing relational learning algorithms can be instantiated with appropriate choices for the se three components and new relational learning algorithms can be composed by new combinations of c omponents. The case study demonstrates how the toolkit facilitates comparison of differen t learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when partic ul components contribute to superior performance. The case study focuses on the simple but im portant special case of univariate network classification, for which the only information avai lable is the structure of class linkage in the network (i.e., only links and some class labels are avail ble). To our knowledge, no work previously has evaluated systematically the power of class-li nkage alone for classification in machine learning benchmark data sets. The results demonstrate clea rly th t simple network-classification models perform remarkably well—well enough that they shoul d be used regularly as baseline classifiers for studies of relational learning for networked dat a. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many la be s are known.", "title": "" } ]
[ { "docid": "802935307aeede808cbcf3eb388dd65a", "text": "We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena,, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck, the phase transition out of it and gradient variance explosion (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more.", "title": "" }, { "docid": "8c36e881f03a1019158cdae2e5de876c", "text": "The projects with embedded systems are used for many different purposes, being a major challenge for the community of developers of such systems. As we benefit from technological advances the complexity of designing an embedded system increases significantly. This paper presents GERSE, a guideline to requirements elicitation for embedded systems. Despite of advances in the area of embedded systems, there is a shortage of requirements elicitation techniques that meet the particularities of this area. The contribution of GERSE is to improve the capture process and organization of the embedded systems requirements.", "title": "" }, { "docid": "aa80419c97d4461d602528def066f26b", "text": "Rheumatoid arthritis (RA) is a chronic inflammatory disease characterized by synovial inflammation that can lead to structural damage of cartilage, bone and tendons. Assessing the inflammatory activity and the severity is essential in RA to help rheumatologists in adopting proper therapeutic strategies and in evaluating disease outcome and response to treatment. In the last years musculoskeletal (MS) ultrasonography (US) underwent tremendous technological development of equipment with increased sensitivity in detecting a wide set of joint and soft tissues abnormalities. In RA MSUS with the use of Doppler modalities is a useful imaging tool to depict inflammatory abnormalities (i.e. synovitis, tenosynovitis and bursitis) and structural changes (i.e. bone erosions, cartilage damage and tendon lesions). In addition, MSUS has been demonstrated to be able to monitor the response to different therapies in RA to guide local diagnostic and therapeutic procedures such as biopsy, fluid aspirations and injections. Future applications based on the development of new tools may improve the role of MSUS in RA.", "title": "" }, { "docid": "19edeca01022e9392fd75bfa2807d4f7", "text": "This paper analyzes the impact of user mobility in multi-tier heterogeneous networks. 
We begin by obtaining the handoff rate for a mobile user in an irregular cellular network with the access point locations modeled as a homogeneous Poisson point process. The received signal-to-interference-ratio (SIR) distribution along with a chosen SIR threshold is then used to obtain the probability of coverage. To capture potential connection failures due to mobility, we assume that a fraction of handoffs result in such failures. Considering a multi-tier network with orthogonal spectrum allocation among tiers and the maximum biased average received power as the tier association metric, we derive the probability of coverage for two cases: 1) the user is stationary (i.e., handoffs do not occur, or the system is not sensitive to handoffs); 2) the user is mobile, and the system is sensitive to handoffs. We derive the optimal bias factors to maximize the coverage. We show that when the user is mobile, and the network is sensitive to handoffs, both the optimum tier association and the probability of coverage depend on the user's speed; a speed-dependent bias factor can then adjust the tier association to effectively improve the coverage, and hence system performance, in a fully-loaded network.", "title": "" }, { "docid": "5c29083624be58efa82b4315976f8dc2", "text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.", "title": "" }, { "docid": "919f42363fed69dc38eba0c46be23612", "text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges on dealing with big medical data. Many of those insights come from medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with different applications ranging from clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. 
The tutorial will include several case studies dealing with some of the important healthcare applications.", "title": "" }, { "docid": "1cab1fccebbf33f815421c8fe94f8251", "text": "This paper establishes a link between three areas, namely Max-Plus Linear System Theory as used for dealing with certain classes of discrete event systems, Network Calculu s for establishing time bounds in communication networks, and real-time scheduling. In particular, it is shown that im portant results from scheduling theory can be easily derive d and unified using Max-Plus Algebra. Based on the proposed network theory for real-time systems, the first polynomial algorithm for the feasibility analysis and optimal priorit y assignment for a general task model is derived.", "title": "" }, { "docid": "ada7b43edc18b321c57a978d7a3859ae", "text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.", "title": "" }, { "docid": "43579ff02692fcbd854f51ef22e9d537", "text": "Scoring the quality of persuasive essays is an important goal of discourse analysis, addressed most recently with highlevel persuasion-related features such as thesis clarity, or opinions and their targets. We investigate whether argumentation features derived from a coarse-grained argumentative structure of essays can help predict essays scores. We introduce a set of argumentation features related to argument components (e.g., the number of claims and premises), argument relations (e.g., the number of supported claims) and typology of argumentative structure (chains, trees). We show that these features are good predictors of human scores for TOEFL essays, both when the coarsegrained argumentative structure is manually annotated and automatically predicted.", "title": "" }, { "docid": "c052c9e920ae871fbf20a8560b87d887", "text": "This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described with emphasis on the analogy and the differences between results in the two settings.", "title": "" }, { "docid": "15f46090f74282257979c38c5f151469", "text": "Integrating data from multiple sources has been a longstanding challenge in the database community. Techniques such as privacy-preserving data mining promises privacy, but assume data has integration has been accomplished. Data integration methods are seriously hampered by inability to share the data to be integrated. This paper lays out a privacy framework for data integration. 
Challenges for data integration in the context of this framework are discussed, in the context of existing accomplishments in data integration. Many of these challenges are opportunities for the data mining community.", "title": "" }, { "docid": "76dcd35124d95bffe47df5decdc5926a", "text": "While kernel drivers have long been know to poses huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers present some of the hardest challenges to static analysis, and their tight coupling with the hardware make dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and fieldsensitive on kernel drivers. To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.", "title": "" }, { "docid": "8268f8de6dce81a98da5580650986b04", "text": "Deliberate self-poisoning (DSP), the most common form of deliberate self-harm, is closely associated with suicide. Identifying risk factors of DSP is necessary for implementing prevention strategies. This study aimed to evaluate the relationship between benzodiazepine (BZD) treatment in psychiatric outpatients and DSP cases at emergency departments (EDs). We performed a retrospective nested case–control study of psychiatric patients receiving BZD therapy to evaluate the relationship between BZD use and the diagnosis of DSP at EDs using data from the nationwide Taiwan National Health Insurance Research Database. Regression analysis yielded an odds ratio (OR) and 95 % confidence interval (95 % CI) indicating that the use of BZDs in psychiatric outpatients was significantly associated with DSP cases at EDs (OR = 4.46, 95 % CI = 3.59–5.53). Having a history of DSP, sleep disorders, anxiety disorders, schizophrenia, depression, or bipolar disorder was associated with a DSP diagnosis at EDs (OR = 13.27, 95 % CI = 8.28–21.29; OR = 5.04, 95 % CI = 4.25–5.98; OR = 3.95, 95 % CI = 3.32–4.70; OR = 7.80, 95 % CI = 5.28–11.52; OR = 15.20, 95 % CI = 12.22–18.91; and OR = 18.48, 95 % CI = 10.13–33.7, respectively). After adjusting for potential confounders, BZD use remained significantly associated with a subsequent DSP diagnosis (adjusted OR = 2.47, 95 % CI = 1.93–3.17). Patients taking higher average cumulative BZD doses were at greater risk of DSP. 
Vigilant evaluation of the psychiatric status of patients prescribed with BZD therapy is critical for the prevention of DSP events at EDs.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "7e7a621393202649c45db3fa958cd466", "text": "Cloud computing with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability) still faces several challenges. The distance between the cloud and the end devices might be an issue for latency-sensitive applications such as disaster management and content delivery applications. Service level agreements (SLAs) may also impose processing at locations where the cloud provider does not have data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or eventually, at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement. It enables processing at the edge while still offering the possibility to interact with the cloud. This paper presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria. We cover both the architectures and the algorithms that make fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as tactile Internet.", "title": "" }, { "docid": "fa1b427e152ee84b8c38687ab84d1f7c", "text": "We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG [44], where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. Per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG’s per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. 
With data-independent bypass, as in stochastic depth [18], we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 [11] for ImageNet [3], where our techniques produce improved accuracy (.15–.41% in precision@1) with substantially less computation (bypassing 25–40% of the layers).", "title": "" }, { "docid": "452156877885aa1883cb55cb3faefb5f", "text": "The smart grid changes the way how energy and information are exchanged and offers opportunities for incentive-based load balancing. For instance, customers may shift the time of energy consumption of household appliances in exchange for a cheaper energy tariff. This paves the path towards a full range of modular tariffs and dynamic pricing that incorporate the overall grid capacity as well as individual customer demands. This also allows customers to frequently switch within a variety of tariffs from different utility providers based on individual energy consumption and provision forecasts. For automated tariff decisions it is desirable to have a tool that assists in choosing the optimum tariff based on a prediction of individual energy need and production. However, the revelation of individual load patterns for smart grid applications poses severe privacy threats for customers as analyzed in depth in literature. Similarly, accurate and fine-grained regional load forecasts are sensitive business information of utility providers that are not supposed to be released publicly. This paper extends previous work in the domain of privacy-preserving load profile matching where load profiles from utility providers and load profile forecasts from customers are transformed in a distance-preserving embedding in order to find a matching tariff. The embeddings neither reveal individual contributions of customers, nor those of utility providers. Prior work requires a dedicated entity that needs to be trustworthy at least to some extent for determining the matches. In this paper we propose an adaption of this protocol, where we use blockchains and smart contracts for this matching process, instead. Blockchains are gaining widespread adaption in the smart grid domain as a powerful tool for public commitments and accountable calculations. While the use of a decentralized and trust-free blockchain for this protocol comes at the price of some privacy degradation (for which a mitigation is outlined), this drawback is outweighed for it enables verifiability, reliability and transparency. Fabian Knirsch, Andreas Unterweger, Günther Eibl and Dominik Engel Salzburg University of Applied Sciences, Josef Ressel Center for User-Centric Smart Grid Privacy, Security and Control, Urstein Süd 1, 5412 Puch bei Hallein, Austria. e-mail: fabian.knirsch@", "title": "" }, { "docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c", "text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. 
They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.", "title": "" }, { "docid": "4726626317b296cca0ca7d62d194ac5a", "text": "This paper presents the main foundations of big data applied to smart cities. A general Internet of Things based architecture is proposed to be applied to different smart cities applications. We describe two scenarios of big data analysis. One of them illustrates some services implemented in the smart campus of the University of Murcia. The second one is focused on a tram service scenario, where thousands of transit-card transactions should be processed. Results obtained from both scenarios show the potential of the applicability of this kind of techniques to provide profitable services of smart cities, such as the management of the energy consumption and comfort in smart buildings, and the detection of travel profiles in smart transport.", "title": "" }, { "docid": "3bde393992b3055083e7348d360f7ec5", "text": "A new smart power switch for industrial, automotive and computer applications developed in BCD (Bipolar, CMOS, DMOS) technology is described. It consists of an on-chip 70 mΩ power DMOS transistor connected in high side configuration and its driver makes the device virtually indestructible and suitable to drive any kind of load with an output current of 2.5 A. If the load is inductive, an internal voltage clamp allows fast demagnetization down to 55 V under the supply voltage. The device includes novel structures for the driver, the fully integrated charge pump circuit and its oscillator. These circuits have specifically been designed to reduce ElectroMagnetic Interference (EMI) thanks to an accurate control of the output voltage slope and the reduction of the output voltage ripple caused by the charge pump itself (several patents pending). An innovative open load circuit allows the detection of the open load condition with high precision (2 to 4 mA within the temperature range and including process spreads). The quiescent current has also been reduced to 600 uA. Diagnostics for CPU feedback is available at the external connections of the chip when the following fault conditions occur: open load; output short circuit to supply voltage; overload or output short circuit to ground; over temperature; under voltage supply.", "title": "" } ]
scidocsrr
135f4254a084e49e8850309c718021a9
Simulation of a photovoltaic panels by using Matlab/Simulink
[ { "docid": "82bea5203ab102bbef0b8663d999abb2", "text": "This paper proposes a novel simplified two-diode model of a photovoltaic (PV) module. The main aim of this study is to represent a PV module as an ideal two-diode model. In order to reduce computational time, the proposed model has a photocurrent source, i.e., two ideal diodes, neglecting the series and shunt resistances. Only four unknown parameters from the datasheet are required in order to analyze the proposed model. The simulation results that are obtained by MATLAB/Simulink are validated with experimental data of a commercial PV module, using different PV technologies such as multicrystalline and monocrystalline, supplied by the manufacturer. It is envisaged that this work can be useful for professionals who require a simple and accurate PV simulator for their design.", "title": "" } ]
[ { "docid": "a9ea1f1f94a26181addac948837c3030", "text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cfd8458a802341eb20ffc14644cd9fad", "text": "Wireless Sensor Networks (WSNs) are crucial in supporting continuous environmental monitoring, where sensor nodes are deployed and must remain operational to collect and transfer data from the environment to a base-station. However, sensor nodes have limited energy in their primary power storage unit, and this energy may be quickly drained if the sensor node remains operational over long periods of time. Therefore, the idea of harvesting ambient energy from the immediate surroundings of the deployed sensors, to recharge the batteries and to directly power the sensor nodes, has recently been proposed. The deployment of energy harvesting in environmental field systems eliminates the dependency of sensor nodes on battery power, drastically reducing the maintenance costs required to replace batteries. In this article, we review the state-of-the-art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring to improve the ecosystem and human life. In addition to presenting the technologies for harvesting energy from ambient sources and the protocols that can take advantage of the harvested energy, we present challenges that must be addressed to further advance energy-harvesting-based WSNs, along with some future work directions to address these challenges.", "title": "" }, { "docid": "d79d6dd8267c66ad98f33bd54ff68693", "text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. 
Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.", "title": "" }, { "docid": "bb408cedbb0fc32f44326eff7a7390f7", "text": "A fully integrated SONET OC-192 transmitter IC using a standard CMOS process consists of an input data register, FIFO, CMU, and 16:1 multiplexer to give a 10Gb/s serial output. A higher FEC rate, 10.7Gb/s, is supported. This chip, using a 0.18/spl mu/m process, exceeds SONET requirements, dissipating 450mW.", "title": "" }, { "docid": "6001982cb50621fe488034d6475d1894", "text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.", "title": "" }, { "docid": "9e208e6beed62575a92f32031b7af8ad", "text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. 
In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.", "title": "" }, { "docid": "b9087793bd9bcc37deef95d1eea09f25", "text": "BACKGROUND\nDolutegravir (GSK1349572), a once-daily HIV integrase inhibitor, has shown potent antiviral response and a favourable safety profile. We evaluated safety, efficacy, and emergent resistance in antiretroviral-experienced, integrase-inhibitor-naive adults with HIV-1 with at least two-class drug resistance.\n\n\nMETHODS\nING111762 (SAILING) is a 48 week, phase 3, randomised, double-blind, active-controlled, non-inferiority study that began in October, 2010. Eligible patients had two consecutive plasma HIV-1 RNA assessments of 400 copies per mL or higher (unless >1000 copies per mL at screening), resistance to two or more classes of antiretroviral drugs, and had one to two fully active drugs for background therapy. Participants were randomly assigned (1:1) to once-daily dolutegravir 50 mg or twice-daily raltegravir 400 mg, with investigator-selected background therapy. Matching placebo was given, and study sites were masked to treatment assignment. The primary endpoint was the proportion of patients with plasma HIV-1 RNA less than 50 copies per mL at week 48, evaluated in all participants randomly assigned to treatment groups who received at least one dose of study drug, excluding participants at one site with violations of good clinical practice. Non-inferiority was prespecified with a 12% margin; if non-inferiority was established, then superiority would be tested per a prespecified sequential testing procedure. A key prespecified secondary endpoint was the proportion of patients with treatment-emergent integrase-inhibitor resistance. The trial is registered at ClinicalTrials.gov, NCT01231516.\n\n\nFINDINGS\nAnalysis included 715 patients (354 dolutegravir; 361 raltegravir). At week 48, 251 (71%) patients on dolutegravir had HIV-1 RNA less than 50 copies per mL versus 230 (64%) patients on raltegravir (adjusted difference 7·4%, 95% CI 0·7 to 14·2); superiority of dolutegravir versus raltegravir was then concluded (p=0·03). Significantly fewer patients had virological failure with treatment-emergent integrase-inhibitor resistance on dolutegravir (four vs 17 patients; adjusted difference -3·7%, 95% CI -6·1 to -1·2; p=0·003). 
Adverse event frequencies were similar across groups; the most commonly reported events for dolutegravir versus raltegravir were diarrhoea (71 [20%] vs 64 [18%] patients), upper respiratory tract infection (38 [11%] vs 29 [8%]), and headache (33 [9%] vs 31 [9%]). Safety events leading to discontinuation were infrequent in both groups (nine [3%] dolutegravir, 14 [4%] raltegravir).\n\n\nINTERPRETATION\nOnce-daily dolutegravir, in combination with up to two other antiretroviral drugs, is well tolerated with greater virological effect compared with twice-daily raltegravir in this treatment-experienced patient group.\n\n\nFUNDING\nViiV Healthcare.", "title": "" }, { "docid": "93da542bb389c9ef6177f0cce6d6ad79", "text": "Public private partnerships (PPP) are long lasting contracts, generally involving large sunk investments, and developed in contexts of great uncertainty. If uncertainty is taken as an assumption, rather as a threat, it could be used as an opportunity. This requires managerial flexibility. The paper addresses the concept of contract flexibility as well as the several possibilities for its incorporation into PPP development. Based upon existing classifications, the authors propose a double entry matrix as a new model for contract flexibility. A case study has been selected – a hospital – to assess and evaluate the benefits of developing a flexible contract, building a model based on the real options theory. The evidence supports the initial thesis that allowing the concessionaire to adapt, under certain boundaries, the infrastructure and services to changing conditions when new information is known, does increase the value of the project. Some policy implications are drawn. © 2012 Elsevier Ltd. APM and IPMA. All rights reserved.", "title": "" }, { "docid": "4292a60a5f76fd3e794ce67d2ed6bde3", "text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.", "title": "" }, { "docid": "9201fc08a8479c6ef0908c3aeb12e5fe", "text": "Twitter is one of the most popular social media platforms that has 313 million monthly active users which post 500 million tweets per day. This popularity attracts the attention of spammers who use Twitter for their malicious aims such as phishing legitimate users or spreading malicious software and advertises through URLs shared within tweets, aggressively follow/unfollow legitimate users and hijack trending topics to attract their attention, propagating pornography. In August of 2014, Twitter revealed that 8.5% of its monthly active users which equals approximately 23 million users have automatically contacted their servers for regular updates. Thus, detecting and filtering spammers from legitimate users are mandatory in order to provide a spam-free environment in Twitter. In this paper, features of Twitter spam detection presented with discussing their effectiveness. Also, Twitter spam detection methods are categorized and discussed with their pros and cons. The outdated features of Twitter which are commonly used by Twitter spam detection approaches are highlighted. 
Some new features of Twitter which, to the best of our knowledge, have not been mentioned by any other works are also presented. Keywords—Twitter spam; spam detection; spam filtering;", "title": "" }, { "docid": "c1a96dbed9373dddd0a7a07770395a7e", "text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production", "title": "" }, { "docid": "b46a9871dc64327f1ab79fa22de084ce", "text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.", "title": "" }, { "docid": "64411f1f8a998c9b23b9641fe1917db4", "text": "Microwave power transmission (MPT) has had a long history before the more recent movement toward wireless power transmission (WPT). MPT can be applied not only to beam-type point-to-point WPT but also to an energy harvesting system fed from distributed or broadcasting radio waves. The key technology is the use of a rectenna, or rectifying antenna, to convert a microwave signal to a DC signal with high efficiency. In this paper, various rectennas suitable for MPT are discussed, including various rectifying circuits, frequency rectennas, and power rectennas.", "title": "" }, { "docid": "0734e55ef60e9e1ef490c03a23f017e8", "text": "High-voltage (HV) pulses are used in pulsed electric field (PEF) applications to provide an effective electroporation process, a process in which harmful microorganisms are disinfected when subjected to a PEF. Depending on the PEF application, different HV pulse specifications are required such as the pulse-waveform shape, the voltage magnitude, the pulse duration, and the pulse repetition rate. In this paper, a generic pulse-waveform generator (GPG) is proposed, and the GPG topology is based on half-bridge modular multilevel converter (HB-MMC) cells. The GPG topology is formed of four identical arms of series-connected HB-MMC cells forming an H-bridge. 
Unlike the conventional HB-MMC-based converters in HVdc transmission, the GPG load power flow is not continuous which leads to smaller size cell capacitors utilization; hence, smaller footprint of the GPG is achieved. The GPG topology flexibility allows the controller software to generate a basic multilevel waveform which can be manipulated to generate the commonly used PEF pulse waveforms. Therefore, the proposed topology offers modularity, redundancy, and scalability. The viability of the proposed GPG converter is validated by MATLAB/Simulink simulation and experimentation.", "title": "" }, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": "" }, { "docid": "b80151949d837ffffdc680e9822b9691", "text": "Neuronal activity causes local changes in cerebral blood flow, blood volume, and blood oxygenation. Magnetic resonance imaging (MRI) techniques sensitive to changes in cerebral blood flow and blood oxygenation were developed by high-speed echo planar imaging. These techniques were used to obtain completely noninvasive tomographic maps of human brain activity, by using visual and motor stimulus paradigms. Changes in blood oxygenation were detected by using a gradient echo (GE) imaging sequence sensitive to the paramagnetic state of deoxygenated hemoglobin. Blood flow changes were evaluated by a spin-echo inversion recovery (IR), tissue relaxation parameter T1-sensitive pulse sequence. A series of images were acquired continuously with the same imaging pulse sequence (either GE or IR) during task activation. Cine display of subtraction images (activated minus baseline) directly demonstrates activity-induced changes in brain MR signal observed at a temporal resolution of seconds. During 8-Hz patterned-flash photic stimulation, a significant increase in signal intensity (paired t test; P less than 0.001) of 1.8% +/- 0.8% (GE) and 1.8% +/- 0.9% (IR) was observed in the primary visual cortex (V1) of seven normal volunteers. The mean rise-time constant of the signal change was 4.4 +/- 2.2 s for the GE images and 8.9 +/- 2.8 s for the IR images. The stimulation frequency dependence of visual activation agrees with previous positron emission tomography observations, with the largest MR signal response occurring at 8 Hz. Similar signal changes were observed within the human primary motor cortex (M1) during a hand squeezing task and in animal models of increased blood flow by hypercapnia. 
By using intrinsic blood-tissue contrast, functional MRI opens a spatial-temporal window onto individual brain physiology.", "title": "" }, { "docid": "37a574d4d969fc681c93508bd14cc904", "text": "A new low offset dynamic comparator for high resolution high speed analog-to-digital application has been designed. Inputs are reconfigured from the typical differential pair comparator such that near equal current distribution in the input transistors can be achieved for a meta-stable point of the comparator. Restricted signal swing clock for the tail current is also used to ensure constant currents in the differential pairs. Simulation based sensitivity analysis is performed to demonstrate the robustness of the new comparator with respect to stray capacitances, common mode voltage errors and timing errors in a TSMC 0.18 μm process. Less than 10mV offset can be easily achieved with the proposed structure, making it favorable for flash and pipeline data conversion applications.", "title": "" }, { "docid": "f2742f6876bdede7a67f4ec63d73ead9", "text": "Momentum methods play a central role in optimization. Several momentum methods are provably optimal, and all use a technique called estimate sequences to analyze their convergence properties. The technique of estimate sequences has long been considered difficult to understand, leading many researchers to generate alternative, “more intuitive” methods and analyses. In this paper we show there is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time. This framework allows us to develop a simple and unified analysis of many existing momentum algorithms, introduce several new algorithms, and most importantly, strengthen the connection between algorithms and continuous-time dynamical systems.", "title": "" }, { "docid": "eeee6fceaec33b4b1ef5aed9f8b0dcf5", "text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT; it contains one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss < -0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.", "title": "" } ]
scidocsrr
8ab4c7502d246208fbb03518c0a34b02
Probabilistic sentential decision diagrams: Learning with massive logical constraints
[ { "docid": "25346cdef3e97173dab5b5499c4d4567", "text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.", "title": "" }, { "docid": "d104206fd95525192240e9a6d6aedd89", "text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.", "title": "" } ]
[ { "docid": "bd8788c3d4adc5f3671f741e884c7f34", "text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.", "title": "" }, { "docid": "94c6f94e805a366c6fa6f995f13a92ba", "text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.", "title": "" }, { "docid": "473d8cbcd597c961819c5be6ab2e658e", "text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. 
With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.", "title": "" }, { "docid": "9d26d8e8b34319defd89a0daca3969e9", "text": "The paper presents a safe robot navigation system based on omnidirectional vision. The 360 degree camera images are analyzed for obstacle detection and avoidance and of course for navigating safely in the given indoor environment. This module can process images in real time and extracts the direction and distance information of the obstacles from the camera system mounted on the robot. This two data is the output of the module. Because of the distortions of the omnidirectional vision, it is necessary to calibrate the camera and not only for that but also to get the right direction and distances information. Several image processing methods and technics were used which are investigated in the rest of this paper.", "title": "" }, { "docid": "14ab095775e6687bde93fbe7849475f5", "text": "In this paper, we present an evolutionary trust game to investigate the formation of trust in the so-called sharing economy from a population perspective. To the best of our knowledge, this is the first attempt to model trust in the sharing economy using the evolutionary game theory framework. Our sharing economy trust model consists of four types of players: a trustworthy provider, an untrustworthy provider, a trustworthy consumer, and an untrustworthy consumer. Through systematic simulation experiments, five different scenarios with varying proportions and types of providers and consumers were considered. Our results show that each type of players influences the existence and survival of other types of players, and untrustworthy players do not necessarily dominate the population even when the temptation to defect (i.e., to be untrustworthy) is high. Our findings may have important implications for understanding the emergence of trust in the context of sharing economy transactions.", "title": "" }, { "docid": "7aca9b586e30a735c51ffe38ef858b38", "text": "In Chapter 1, Reigeluth described design theory as being different from descriptive theory in that it offers means to achieve goals. For an applied field like education, design theory is more useful and more easily applied than its descriptive counterpart, learning theory. But none of the 22 theories described in this book has yet been developed to a state of perfection; at very least they can all benefit from more detailed guidance for applying their methods to diverse situations. And more theories are sorely needed to provide guidance for additional kinds of learning and human development and for different kinds of situations, including the use of new information technologies as tools. This leads us to the important question, \" What research methods are most helpful for creating and improving instructional design theories? \" In this chapter, we offer a detailed description of one research methodology that holds much promise for generating the kind of knowledge that we believe is most useful to educators—a methodology that several theorists in this book have intuitively used to develop their theories. We refer to this methodology as \"formative research\"—a kind of developmental research or action research that is intended to improve design theory for designing instructional practices or processes. 
Reigeluth (1989) and Romiszowski (1988) have recommended this approach to expand the knowledge base in instructional-design theory. Newman (1990) has suggested something similar for research on the organizational impact of computers in schools. And Greeno, Collins and Resnick (1996) have identified several groups of researchers who are conducting something similar that they call \" design experiments, \" in which \" researchers and practitioners, particularly teachers, collaborate in the design, implementation, and analysis of changes in practice. \" (p. 15) Formative research has also been used for generating knowledge in as broad an area as systemic change in education We intend for this chapter to help guide educational researchers who are developing and refining instructional-design theories. Most researchers have not had the opportunity to learn formal research methodologies for developing design theories. Doctoral programs in universities tend to emphasize quantitative and qualitative research methodologies for creating descriptive knowledge of education. However, design theories are guidelines for practice, which tell us \"how to do\" education, not \"what is.\" We have found that traditional quantitative research methods (e.g., experiments, surveys, correlational analyses) are not particularly useful for improving instructional-design theory— especially in the early stages of development. Instead, …", "title": "" }, { "docid": "56b3a5ff0295d0ffce2a60dc60c0033a", "text": "This first installment of the new Human Augmentation department looks at various technologies designed to augment the human intellect and amplify human perception and cognition. Linking back to early work in interactive computing, Albrecht Schmidt considers how novel technologies can create a new relationship between digital technologies and humans.", "title": "" }, { "docid": "16d3a7217182ad331d85eb619fa459ee", "text": "Pupil diameter was monitored during picture viewing to assess effects of hedonic valence and emotional arousal on pupillary responses. Autonomic activity (heart rate and skin conductance) was concurrently measured to determine whether pupillary changes are mediated by parasympathetic or sympathetic activation. Following an initial light reflex, pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance change, supporting the interpretation that sympathetic nervous system activity modulates these changes in the context of affective picture viewing. Taken together, the data provide strong support for the hypothesis that the pupil's response during affective picture viewing reflects emotional arousal associated with increased sympathetic activity.", "title": "" }, { "docid": "b12defb3d9d7c5ccda8c3e0b0858f55f", "text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. 
We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.", "title": "" }, { "docid": "088fdd091c2cc70f2e000622be4f3c62", "text": "Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.", "title": "" }, { "docid": "77e61d56d297b62e1078542fd74ffe5e", "text": "This paper introduces a complete design method to construct an adaptive fuzzy logic controller (AFLC) for DC–DC converter. In a conventional fuzzy logic controller (FLC), knowledge on the system supplied by an expert is required for developing membership functions (parameters) and control rules. The proposed AFLC, on the other hand, do not required expert for making parameters and control rules. Instead, parameters and rules are generated using a model data file, which contains summary of input–output pairs. The FLC use Mamdani type fuzzy logic controllers for the defuzzification strategy and inference operators. The proposed controller is designed and verified by digital computer simulation and then implemented for buck, boost and buck–boost converters by using an 8-bit microcontroller. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e1edaf3e8754e8403b9be29f58ba3550", "text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.", "title": "" }, { "docid": "22c6ae71c708d5e2d1bc7e5e085c4842", "text": "Head pose estimation is a fundamental task for face and social related research. 
Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions cannot be guaranteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also propose a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.", "title": "" }, { "docid": "a4a56e0647849c22b48e7e5dc3f3049b", "text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system sets a time limit and uses only the last few seconds of data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process.", "title": "" }, { "docid": "53cf85922865609c4a7591bd06679660", "text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g., imageability and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, with the latter task producing the largest semantic-level effects.
Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.", "title": "" }, { "docid": "0a94a995f91afd641013b97dcec7da2a", "text": "Two competing encoding concepts are known to scale well with growing amounts of XML data: XPath Accelerator encoding implemented by MonetDB for in-memory documents and X-Hive’s Persistent DOM for on-disk storage. We identified two ways to improve XPath Accelerator and present prototypes for the respective techniques: BaseX boosts inmemory performance with optimized data and value index structures while Idefix introduces native block-oriented persistence with logarithmic update behavior for true scalability, overcoming main-memory constraints. An easy-to-use Java-based benchmarking framework was developed and used to consistently compare these competing techniques and perform scalability measurements. The established XMark benchmark was applied to all four systems under test. Additional fulltext-sensitive queries against the well-known DBLP database complement the XMark results. Not only did the latest version of X-Hive finally surprise with good scalability and performance numbers. Also, both BaseX and Idefix hold their promise to push XPath Accelerator to its limits: BaseX efficiently exploits available main memory to speedup XML queries while Idefix surpasses main-memory constraints and rivals the on-disk leadership of X-Hive. The competition between XPath Accelerator and Persistent DOM definitely is relaunched.", "title": "" }, { "docid": "e56af4a3a8fbef80493d77b441ee1970", "text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.", "title": "" }, { "docid": "7835a3ecdb9a8563e29ee122e5987503", "text": "Women diagnosed with complete spinal cord injury (SCI) at T10 or higher report sensations generated by vaginal-cervical mechanical self-stimulation (CSS). In this paper we review brain responses to sexual arousal and orgasm in such women, and further hypothesize that the afferent pathway for this unexpected perception is provided by the Vagus nerves, which bypass the spinal cord. Using functional magnetic resonance imaging (fMRI), we ascertained that the region of the medulla oblongata to which the Vagus nerves project (the Nucleus of the Solitary Tract or NTS) is activated by CSS. We also used an objective measure, CSS-induced analgesia response to experimentally induced finger pain, to ascertain the functionality of this pathway. During CSS, several women experienced orgasms. Brain regions activated during orgasm included the hypothalamic paraventricular nucleus, amygdala, accumbens-bed nucleus of the stria terminalis-preoptic area, hippocampus, basal ganglia (especially putamen), cerebellum, and anterior cingulate, insular, parietal and frontal cortices, and lower brainstem (central gray, mesencephalic reticular formation, and NTS). 
We conclude that the Vagus nerves provide a spinal cord-bypass pathway for vaginal-cervical sensibility and that activation of this pathway can produce analgesia and orgasm.", "title": "" }, { "docid": "7ee4a708d41065c619a5bf9e86f871a3", "text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.", "title": "" } ]
scidocsrr
819cab6856ab332744e87d70cdd04247
A Supervised Patch-Based Approach for Human Brain Labeling
[ { "docid": "3342e2f79a6bb555797224ac4738e768", "text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.", "title": "" }, { "docid": "6df12ee53551f4a3bd03bca4ca545bf1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" } ]
[ { "docid": "097e2c17a34db96ba37f68e28058ceba", "text": "ARTICLE The healing properties of compassion have been written about for centuries. The Dalai Lama often stresses that if you want others to be happy – focus on compassion; if you want to be happy yourself – focus on compassion (Dalai Lama 1995, 2001). Although all clinicians agree that compassion is central to the doctor–patient and therapist–client relationship, recently the components of com­ passion have been looked at through the lens of Western psychological science and research 2003a,b). Compassion can be thought of as a skill that one can train in, with in creasing evidence that focusing on and practising com passion can influence neurophysiological and immune systems (Davidson 2003; Lutz 2008). Compassion­focused therapy refers to the under pinning theory and process of applying a compassion model to psy­ chotherapy. Compassionate mind training refers to specific activities designed to develop compassion­ ate attributes and skills, particularly those that influence affect regula tion. Compassion­focused therapy adopts the philosophy that our under­ standing of psychological and neurophysiological processes is developing at such a rapid pace that we are now moving beyond 'schools of psychotherapy' towards a more integrated, biopsycho social science of psycho therapy (Gilbert 2009). Compassion­focused therapy and compassionate mind training arose from a number of observations. First, people with high levels of shame and self­ criticism can have enormous difficulty in being kind to themselves, feeling self­warmth or being self­compassionate. Second, it has long been known that problems of shame and self­criticism are often rooted in histories of abuse, bullying, high expressed emo­ tion in the family, neglect and/or lack of affection Individuals subjected to early experiences of this type can become highly sensitive to threats of rejection or criticism from the outside world and can quickly become self­attacking: they experience both their external and internal worlds as easily turning hostile. Third, it has been recognised that working with shame and self­criticism requires a thera peutic focus on memories of such early experiences And fourth, there are clients who engage with the cognitive and behavioural tasks of a therapy, and become skilled at generating (say) alternatives for their negative thoughts and beliefs, but who still do poorly in therapy (Rector 2000). They are likely to say, 'I understand the logic of my alterna­ tive thinking but it doesn't really help me feel much better' or 'I know I'm not to blame for the abuse but I still feel that I …", "title": "" }, { "docid": "660fe15405c2006e20bcf0e4358c7283", "text": "We introduce a framework for feature selection based on depe ndence maximization between the selected features and the labels of an estimation problem, u sing the Hilbert-Schmidt Independence Criterion. The key idea is that good features should be highl y dependent on the labels. Our approach leads to a greedy procedure for feature selection. We show that a number of existing feature selectors are special cases of this framework. Experiments on both artificial and real-world data show that our feature selector works well in practice.", "title": "" }, { "docid": "c6a25dc466e4a22351359f17bd29916c", "text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. 
We often observe this phenomenon when applying K-Means to datasets where the number of dimensions is n ≥ 10 and the number of desired clusters is k ≥ 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data.", "title": "" }, { "docid": "7ffbc12161510aa8ef01d804df9c5648", "text": "Networks represent relationships between entities in many complex systems, spanning from online social interactions to biological cell development and brain connectivity. In many cases, relationships between entities are unambiguously known: are two users “friends” in a social network? Do two researchers collaborate on a published article? Do two road segments in a transportation system intersect? These are directly observable in the system in question. In most cases, relationships between nodes are not directly observable and must be inferred: Does one gene regulate the expression of another? Do two animals who physically co-locate have a social bond? Who infected whom in a disease outbreak in a population?\n Existing approaches for inferring networks from data are found across many application domains and use specialized knowledge to infer and measure the quality of inferred network for a specific task or hypothesis. However, current research lacks a rigorous methodology that employs standard statistical validation on inferred models. In this survey, we examine (1) how network representations are constructed from underlying data, (2) the variety of questions and tasks on these representations over several domains, and (3) validation strategies for measuring the inferred network’s capability of answering questions on the system of interest.", "title": "" }, { "docid": "ef8d88d57858706ba269a8f3aaa989f3", "text": "The mid-20th century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.", "title": "" }, { "docid": "43fc8ff9339780cc91762a28e36aaad7", "text": "The Internet of Things (IoT) has brought the vision of the smarter world into reality and, including healthcare, it has many application domains. The convergence of IoT-cloud can play a significant role in smart healthcare by offering better insight of healthcare content to support affordable and quality patient care. In this paper, we proposed a model that allows the sensor to monitor the patient's symptoms. The collected monitored data is transmitted to the gateway via Bluetooth and then to the cloud server through a docker container using the internet. Thus enabling the physician to diagnose and monitor health problems wherever the patient is.
Also, we address the several challenges related to health monitoring and management using IoT.", "title": "" }, { "docid": "a1fe64aacbbe80a259feee2874645f09", "text": "Database consolidation is gaining wide acceptance as a means to reduce the cost and complexity of managing database systems. However, this new trend poses many interesting challenges for understanding and predicting system performance. The consolidated databases in multi-tenant settings share resources and compete with each other for these resources. In this work we present an experimental study to highlight how these interactions can be fairly complex. We argue that individual database staging or workload profiling is not an adequate approach to understanding the performance of the consolidated system. Our initial investigations suggest that machine learning approaches that use monitored data to model the system can work well for important tasks.", "title": "" }, { "docid": "39cde8c4da81d72d7a0ff058edb71409", "text": "One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such aComplex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines . We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.", "title": "" }, { "docid": "231365d1de30f3529752510ec718dd38", "text": "The lack of reliability of gliding contacts in highly constrained environments induces manufacturers to develop contactless transmission power systems such as rotary transformers. The following paper proposes an optimal design methodology for rotary transformers supplied from a low-voltage source at high temperatures. The method is based on an accurate multidisciplinary analysis model divided into magnetic, thermal and electrical parts, optimized thanks to a sequential quadratic programming method. The technique is used to discuss the design particularities of rotary transformers. Two optimally designed structures of rotary transformers : an iron silicon coaxial one and a ferrite pot core one, are compared.", "title": "" }, { "docid": "e94183f4200b8c6fef1f18ec0e340869", "text": "Hoon Sohn Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C926 Los Alamos National Laboratory, Los Alamos, NM 87545 e-mail: sohn@lanl.gov Charles R. Farrar Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C946 e-mail: farrar@lanl.gov Norman F. Hunter Engineering Sciences & Applications Division, Measurement Technology Group, M/S C931 e-mail: hunter@lanl.gov Keith Worden Department of Mechanical Engineering University of Sheffield Mappin St. Sheffield S1 3JD, United Kingdom e-mail: k.worden@sheffield.ac.uk", "title": "" }, { "docid": "e677799d3bee1b25e74dc6c547c1b6c2", "text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.", "title": "" }, { "docid": "daac9ee402eebc650fe4f98328a7965d", "text": "5.1. 
Detection Formats 475 5.2. Food Quality and Safety Analysis 477 5.2.1. Pathogens 477 5.2.2. Toxins 479 5.2.3. Veterinary Drugs 479 5.2.4. Vitamins 480 5.2.5. Hormones 480 5.2.6. Diagnostic Antibodies 480 5.2.7. Allergens 481 5.2.8. Proteins 481 5.2.9. Chemical Contaminants 481 5.3. Medical Diagnostics 481 5.3.1. Cancer Markers 481 5.3.2. Antibodies against Viral Pathogens 482 5.3.3. Drugs and Drug-Induced Antibodies 483 5.3.4. Hormones 483 5.3.5. Allergy Markers 483 5.3.6. Heart Attack Markers 484 5.3.7. Other Molecular Biomarkers 484 5.4. Environmental Monitoring 484 5.4.1. Pesticides 484 5.4.2. 2,4,6-Trinitrotoluene (TNT) 485 5.4.3. Aromatic Hydrocarbons 485 5.4.4. Heavy Metals 485 5.4.5. Phenols 485 5.4.6. Polychlorinated Biphenyls 487 5.4.7. Dioxins 487 5.5. Summary 488 6. Conclusions 489 7. Abbreviations 489 8. Acknowledgment 489 9. References 489", "title": "" }, { "docid": "96d90b5e2046b4629f1625649256ecaa", "text": "Today's smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users' privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked.\n To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps triggering our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.", "title": "" }, { "docid": "a5e960a4b20959a1b4a85e08eebab9d3", "text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). 
In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.", "title": "" }, { "docid": "b2e02a1818f862357cf5764afa7fa197", "text": "The goal of this paper is the automatic identification of characters in TV and feature film material. In contrast to standard approaches to this task, which rely on the weak supervision afforded by transcripts and subtitles, we propose a new method requiring only a cast list. This list is used to obtain images of actors from freely available sources on the web, providing a form of partial supervision for this task. In using images of actors to recognize characters, we make the following three contributions: (i) We demonstrate that an automated semi-supervised learning approach is able to adapt from the actor’s face to the character’s face, including the face context of the hair; (ii) By building voice models for every character, we provide a bridge between frontal faces (for which there is plenty of actor-level supervision) and profile (for which there is very little or none); and (iii) by combining face context and speaker identification, we are able to identify characters with partially occluded faces and extreme facial poses. Results are presented on the TV series ‘Sherlock’ and the feature film ‘Casablanca’. We achieve the state-of-the-art on the Casablanca benchmark, surpassing previous methods that have used the stronger supervision available from transcripts.", "title": "" }, { "docid": "b9d25bdbb337a9d16a24fa731b6b479d", "text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.", "title": "" }, { "docid": "afce201838e658aac3e18c2f26cff956", "text": "With the current set of design tools and methods available to game designers, vast portions of the space of possible games are not currently reachable. In the past, technological advances such as improved graphics and new controllers have driven the creation of new forms of gameplay, but games have still not made great strides into new gameplay experiences. 
We argue that the development of innovative artificial intelligence (AI) systems plays a crucial role in the exploration of currently unreachable spaces. To aid in exploration, we suggest a practice called AI-based game design, an iterative design process that deeply integrates the affordances of an AI system within the context of game design. We have applied this process in our own projects, and in this paper we present how it has pushed the boundaries of current game genres and experiences, as well as discuss the future AI-based game design.", "title": "" }, { "docid": "37e552e4352cd5f8c76dcefd856e0fc8", "text": "Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device’s owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls’ sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F -measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods.", "title": "" }, { "docid": "eb7eb6777a68fd594e2e94ac3cba6be9", "text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. 
This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.", "title": "" }, { "docid": "036cbf58561de8bfa01ddc4fa8d7b8f2", "text": "The purpose of this paper is to discover a semi-optimal set of trading rules and to investigate its effectiveness as applied to Egyptian Stocks. The aim is to mix different categories of technical trading rules and let an automatic evolution process decide which rules are to be used for particular time series. This difficult task can be achieved by using genetic algorithms (GA's), they permit the creation of artificial experts taking their decisions from an optimal subset of the a given set of trading rules. The GA's based on the survival of the fittest, do not guarantee a global optimum but they are known to constitute an effective approach in optimizing non-linear functions. Selected liquid stocks are tested and GA trading rules were compared with other conventional and well known technical analysis rules. The Proposed GA system showed clear better average profit and in the same high sharpe ratio, which indicates not only good profitability but also better risk-reward trade-off", "title": "" } ]
scidocsrr
5009c4e6aecd17bd7a2e9b3f2f74a0db
Iterative Entity Alignment via Joint Knowledge Embeddings
[ { "docid": "99d9dcef0e4441ed959129a2a705c88e", "text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: daniel.rinser@alumni.hpi.uni-potsdam.de (Daniel Rinser), dustin.lange@hpi.uni-potsdam.de (Dustin Lange), naumann@hpi.uni-potsdam.de (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. 
For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2.
(1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions", "title": "" } ]
[ { "docid": "c2c5f0f8b4647c651211b50411382561", "text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.", "title": "" }, { "docid": "eca2bfe1b96489e155e19d02f65559d6", "text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays", "title": "" }, { "docid": "4d894156dd1ad6864eb6b47ed6bee085", "text": "Preference learning is a fundamental problem in various smart computing applications such as personalized recommendation. Collaborative filtering as a major learning technique aims to make use of users’ feedback, for which some recent works have switched from exploiting explicit feedback to implicit feedback. One fundamental challenge of leveraging implicit feedback is the lack of negative feedback, because there is only some observed relatively “positive” feedback available, making it difficult to learn a prediction model. In this paper, we propose a new and relaxed assumption of pairwise preferences over item-sets, which defines a user’s preference on a set of items (item-set) instead of on a single item only. The relaxed assumption can give us more accurate pairwise preference relationships. With this assumption, we further develop a general algorithm called CoFiSet (collaborative filtering via learning pairwise preferences over item-sets), which contains four variants, CoFiSet(SS), CoFiSet(MOO), CoFiSet(MOS) and CoFiSet(MSO), representing “Set vs. Set,” “Many ‘One vs. One’,” “Many ‘One vs. Set”’ and “Many ‘Set vs. One”’ pairwise comparisons, respectively. Experimental results show that our CoFiSet(MSO) performs better than several state-of-the-art methods on five ranking-oriented evaluation metrics on three real-world data sets.", "title": "" }, { "docid": "44d468d53b66f719e569ea51bb94f6cb", "text": "The paper gives an overview on the developments at the German Aerospace Center DLR towards anthropomorphic robots which not only tr y to approach the force and velocity performance of humans, but also have simi lar safety and robustness features based on a compliant behaviour. We achieve thi s compliance either by joint torque sensing and impedance control, or, in our newes t systems, by compliant mechanisms (so called VIA variable impedance actuators), whose intrinsic compliance can be adjusted by an additional actuator. 
Both approaches required highly integrated mechatronic design and advanced, nonlinear control and planning strategies, which are presented in this paper.", "title": "" }, { "docid": "306a33d3ad0f70eb6fa2209c63747a6f", "text": "Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera which provides a full image compared to catadioptric visual sensors and do not increase the size and the weakness of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras with distortions directly included in its parameters. This unified projection model consists on a projection onto a virtual unitary sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results help to confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and experimented.", "title": "" }, { "docid": "0d2ddb448c01172e53f19d9d5ac39f21", "text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.", "title": "" }, { "docid": "8a21ff7f3e4d73233208d5faa70eb7ce", "text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique.
Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.", "title": "" }, { "docid": "779cc0258ae35fd3b6d70c2a62a1a857", "text": "Opinion mining and sentiment analysis have become popular in linguistic resource rich languages. Opinions for such analysis are drawn from many forms of freely available online/electronic sources, such as websites, blogs, news re-ports and product reviews. But attention received by less resourced languages is significantly less. This is because the success of any opinion mining algorithm depends on the availability of resources, such as special lexicon and WordNet type tools. In this research, we implemented a less complicated but an effective approach that could be used to classify comments in less resourced languages. We experimented the approach for use with Sinhala Language where no such opinion mining or sentiment analysis has been carried out until this day. Our algorithm gives significantly promising results for analyzing sentiments in Sinhala for the first time.", "title": "" }, { "docid": "f420d1dc56ab1d78533ebff9754fbcce", "text": "The purpose of this study was to survey the mental toughness and physical activity among student university of Tabriz. Baecke physical activity questionnaire, mental thoughness48 and demographic questionnaire was distributed between students. 355 questionnaires were collected. Correlation, , multiple ANOVA and independent t-test was used for analyzing the hypotheses. The result showed that there was significant relationship between some of physical activity and mental toughness subscales. Two groups active and non-active were compared to find out the mental toughness differences, Student who obtained the 75% upper the physical activity questionnaire was active (n=97) and Student who obtained the 25% under the physical activity questionnaire was inactive group (n=95).The difference between active and non-active physically people showed that active student was significantly mentally toughness. It is expected that changes in physical activity levels significantly could be evidence of mental toughness changes, it should be noted that the other variables should not be ignored.", "title": "" }, { "docid": "2361e70109a3595241b2cdbbf431659d", "text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. 
It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint", "title": "" }, { "docid": "8b054ce1961098ec9c7d66db33c53abd", "text": "This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the accuracy of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. On the other hand, for outdoor scenes, LiDARs are still considered the standard sensor, which comparatively provide much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage.", "title": "" }, { "docid": "53821da1274fd420fe0f7eeba024b95d", "text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. 
Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.", "title": "" }, { "docid": "7f57322b6e998d629d1a67cd5fb28da9", "text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.", "title": "" }, { "docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5", "text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.", "title": "" }, { "docid": "0879399fcb38c103a0e574d6d9010215", "text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. 
Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.", "title": "" }, { "docid": "c24550119d4251d6d7ce1219b8aa0ee4", "text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.", "title": "" }, { "docid": "2393fc67fdca6b98695d0940fba19ca3", "text": "Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real- life vulnerability data of the past 6 years from National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.", "title": "" }, { "docid": "99cd180d0bb08e6360328b77219919c1", "text": "In this paper, we describe our approach to RecSys 2015 challenge problem. 
Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.", "title": "" }, { "docid": "bb404a57964fcd5500006e039ba2b0dd", "text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.", "title": "" } ]
scidocsrr
b10bd07f3a3c5cb0ff56d279dac00f02
Modelling IT projects success with Fuzzy Cognitive Maps
[ { "docid": "447c36d34216b8cb890776248d9cc010", "text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.", "title": "" } ]
[ { "docid": "347509d68f6efd4da747a7a3e704a9a2", "text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.", "title": "" }, { "docid": "5b0eaf636d6d8cf0523e3f00290b780f", "text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.", "title": "" }, { "docid": "7f070d85f4680a2b88d3b530dff0cfc5", "text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). 
Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.", "title": "" }, { "docid": "a33f862d0b7dfde7b9f18aa193db9acf", "text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor  awais.shakoor22@gmail.com Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. 
Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). 
Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).", "title": "" }, { "docid": "d0253bb3efe714e6a34e8dd5fc7dcf81", "text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.", "title": "" }, { "docid": "73267467deec2701d6628a0d3572132e", "text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. 
The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4-IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. Neurology® 2015;85:1–13 GLOSSARY ADEM = acute disseminated encephalomyelitis; AQP4 = aquaporin-4; IgG = immunoglobulin G; IPND = International Panel for NMO Diagnosis; LETM = longitudinally extensive transverse myelitis lesions; MOG = myelin oligodendrocyte glycoprotein; MS = multiple sclerosis; NMO = neuromyelitis optica; NMOSD = neuromyelitis optica spectrum disorders; SLE = systemic lupus erythematosus; SS = Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis but relapsing cases were described in the 20th century. MRI revealed normal brain scans and ≥3 vertebral segment longitudinally extensive transverse myelitis lesions (LETM) in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4–immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria that relaxed clinical", "title": "" },
{ "docid": "6a1073b72ef20fd59e705400dbdcc868", "text": "In today's world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today's electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken.", "title": "" }, { "docid": "e67bb4c784b89b2fee1ab7687b545683", "text": "Many people have a strong intuition that there is something morally objectionable about playing violent video games, particularly with increases in the number of people who are playing them and the games' alleged contribution to some highly publicized crimes. In this paper, I use the framework of utilitarian, deontological, and virtue ethical theories to analyze the possibility that there might be some philosophical foundation for these intuitions. I raise the broader question of whether or not participating in authentic simulations of immoral acts in general is wrong.
I argue that neither the utilitarian, nor the Kantian has substantial objections to violent game playing, although they offer some important insights into playing games in general and what it is morally to be a "good sport." The Aristotelian, however, has a plausible and intuitive way to protest participation in authentic simulations of violent acts in terms of character: engaging in simulated immoral acts erodes one's character and makes it more difficult for one to live a fulfilled eudaimonic life.", "title": "" }, { "docid": "b36cc742445db810d40c884a90e2cf42", "text": "Telecommunication sector generates a huge amount of data due to increasing number of subscribers, rapidly renewable technologies; data based applications and other value added service. This data can be usefully mined for churn analysis and prediction. Significant research had been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.", "title": "" }, { "docid": "6bf38b6decda962ea03ab429f5fbde4f", "text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "e16bf4ab7c56b6827369f19afb2d4744", "text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice.
Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory networks (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirection LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.", "title": "" }, { "docid": "fbcdb3d565519b47922394dc9d84985f", "text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.", "title": "" }, { "docid": "b8322d65e61be7fb252b2e418df85d3e", "text": "Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered.", "title": "" }, { "docid": "b4bd19c2285199e280cb41e733ec5498", "text": "In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. It presents us a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data. Also, deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server.
Mobile AR devices can capture real-time videos and locally decide which frame should be sent to the back-end server for logo detection. The server schedules all detection jobs to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system can detect the view change activity with an accuracy of 95.7% and successfully process 40 image processing jobs before deadline.", "title": "" }, { "docid": "cc2a7d6ac63f12b29a6d30f20b5547be", "text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user's context, where context includes the user's physical, social, emotional, and mental (focus-of-attention) environments. While a user's context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk system in a desktop setting and are currently using it to build an intelligent home environment.", "title": "" }, { "docid": "5b0b8da7faa91343bad6296fd7cb181f", "text": "Transportation research relies heavily on a variety of data. From sensors to surveys, data supports day-to-day operations as well as long-term planning and decision-making. The challenges that arise due to the volume and variety of data that are found in transportation research can be effectively addressed by ontologies. This opportunity has already been recognized – there are a number of existing transportation ontologies, however the relationship between them is unclear. The goal of this work is to provide an overview of the opportunities for ontologies in transportation research and operation, and to present a survey of existing transportation ontologies to serve two purposes: (1) to provide a resource for the transportation research community to aid in understanding (and potentially selecting between) existing transportation ontologies; and (2) to identify future work for the development of transportation ontologies, by identifying areas that may be lacking.", "title": "" }, { "docid": "96f616c7a821c1f74fc77e5649483343", "text": "Study of the forecasting models using large scale microblog discussions and the search behavior data can provide a good insight for better understanding the market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI from Google) for a period of June 2010 to September 2011. We model a set of comprehensive causative relationships over this dataset for various market securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100), commodity markets (oil and gold) and Euro Forex rates. We also investigate the lagged and statistically causative relations of Twitter sentiments developed during active trading days and market inactive days in combination with the search behavior of public before any change in the prices/indices. Our results show extent of lagged significance with high correlation value upto 0.82 between search volumes and gold price in USD.
We find weekly accuracy in direction (up and down prediction) uptil 94.3% for DJIA and 90% for NASDAQ-100 with significant reduction in mean average percentage error for all the forecasting models.", "title": "" }, { "docid": "a5168d6ca63300f26b7388f67d10cb3c", "text": "In recent years, the improvement of wireless protocols, the development of cloud services and the lower cost of hardware have started a new era for smart homes. One such enabling technologies is fog computing, which extends cloud computing to the edge of a network allowing for developing novel Internet of Things (IoT) applications and services. Under the IoT fog computing paradigm, IoT gateways are usually utilized to exchange messages with IoT nodes and a cloud. WiFi and ZigBee stand out as preferred communication technologies for smart homes. WiFi has become very popular, but it has a limited application due to its high energy consumption and the lack of standard mesh networking capabilities for low-power devices. For such reasons, ZigBee was selected by many manufacturers for developing wireless home automation devices. As a consequence, these technologies may coexist in the 2.4 GHz band, which leads to collisions, lower speed rates and increased communications latencies. This article presents ZiWi, a distributed fog computing Home Automation System (HAS) that allows for carrying out seamless communications among ZigBee and WiFi devices. This approach diverges from traditional home automation systems, which often rely on expensive central controllers. In addition, to ease the platform's building process, whenever possible, the system makes use of open-source software (all the code of the nodes is available on GitHub) and Commercial Off-The-Shelf (COTS) hardware. The initial results, which were obtained in a number of representative home scenarios, show that the developed fog services respond several times faster than the evaluated cloud services, and that cross-interference has to be taken seriously to prevent collisions. In addition, the current consumption of ZiWi's nodes was measured, showing the impact of encryption mechanisms.", "title": "" }, { "docid": "52d2ff16f6974af4643a15440ae09fec", "text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today’s higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of elearning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students’ and educators’ self-regulation skills, in particular their metacognitive skills. In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their selfregulation skills. 
In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning. This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.", "title": "" } ]
scidocsrr
52da20345849c0f54f802559d5450dfd
Heart rate monitoring from wrist-type PPG based on singular spectrum analysis with motion decision
[ { "docid": "7c98ac06ea8cb9b83673a9c300fb6f4c", "text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.", "title": "" } ]
[ { "docid": "8a85d05f4ed31d3dba339bb108b39ba4", "text": "Access to genetic and genomic resources can greatly facilitate biological understanding of plant species leading to improved crop varieties. While model plant species such as Arabidopsis have had nearly two decades of genetic and genomic resource development, many major crop species have seen limited development of these resources due to the large, complex nature of their genomes. Cultivated potato is among the ranks of crop species that, despite substantial worldwide acreage, have seen limited genetic and genomic tool development. As technologies advance, this paradigm is shifting and a number of tools are being developed for important crop species such as potato. This review article highlights numerous tools that have been developed for the potato community with a specific focus on the reference de novo genome assembly and annotation, genetic markers, transcriptomics resources, and newly emerging resources that extend beyond a single reference individual. El acceso a los recursos genéticos y genómicos puede facilitar en gran medida el entendimiento biológico de las especies de plantas, lo que conduce a variedades mejoradas de cultivos. Mientras que el modelo de las especies de plantas como Arabidopsis ha tenido cerca de dos décadas de desarrollo de recursos genéticos y genómicos, muchas especies de cultivos principales han visto desarrollo limitado de estos recursos debido a la naturaleza grande, compleja, de sus genomios. La papa cultivada está ubicada entre las especies de plantas que a pesar de su superficie substancial mundial, ha visto limitado el desarrollo de las herramientas genéticas y genómicas. A medida que avanzan las tecnologías, este paradigma está girando y se han estado desarrollando un número de herramientas para especies importantes de cultivo tales como la papa. Este artículo de revisión resalta las numerosas herramientas que se han desarrollado para la comunidad de la papa con un enfoque específico en la referencia de ensamblaje y registro de genomio de novo, marcadores genéticos, recursos transcriptómicos, y nuevas fuentes emergentes que se extienden más allá de la referencia de un único individuo.", "title": "" }, { "docid": "9832eb4b5d47267d7b99e87bf853d30e", "text": "Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo→ sketch and artist painting style transfer. However, existing models can only be capable of transferring the low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow→sheep, motor→ bicycle, cat→dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. 
Instead of directly making the synthesized samples close to the target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data to be semantically closer to the real data of the target category than to the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to disentangle the image background from object semantic changes. Experiments on several semantic manipulation tasks on the ImageNet and MSCOCO datasets show a considerable performance gain for our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model in generating manipulated results with high visual fidelity and reasonable object semantics.", "title": "" }, { "docid": "7f58cbda4cf0a08fec5515ef2ba3c931", "text": "Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm — generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above — the results also show that our approach produces better classification results than similar GAN models.", "title": "" }, { "docid": "20707cdc68b15fe46aaece52ca6aff62", "text": "The potential cardiovascular benefits of several trending foods and dietary patterns are still incompletely understood, and nutritional science continues to evolve. However, in the meantime, a number of controversial dietary patterns, foods, and nutrients have received significant media exposure and are mired by hype. This review addresses some of the more popular foods and dietary patterns that are promoted for cardiovascular health to provide clinicians with accurate information for patient discussions in the clinical setting.", "title": "" }, { "docid": "5faa1d3acdd057069fb1dab75d7b0803", "text": "The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. 
We present new experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14% gain over state-of-the-art.", "title": "" }, { "docid": "274485dd39c0727c99fcc0a07d434b25", "text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on child mortality rate than on fetus mortality. Even it is a same situation in developed country. Our aim is to provide technological solution to help decrease the fetal mortality rate. Also if we consider pregnant women, they have to come to hospital 2-3 times a week for their regular checkups. It becomes a problem for working women and women having diabetes or other disease. For these reasons it would be very helpful if they can do this by themselves at home. This will reduce the frequency of their visit to the hospital at same time cause no compromise in the wellbeing of both the mother and the child. The end to end system consists of wearable sensors, built into a fabric belt, that collects and sends vital signs of patients via bluetooth to smart mobile phones for further processing and made available to required personnel allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.", "title": "" }, { "docid": "c6ad38fa33666cf8d28722b9a1127d07", "text": "Weakly-supervised semantic image segmentation suffers from lacking accurate pixel-level annotations. In this paper, we propose a novel graph convolutional network-based method, called GraphNet, to learn pixel-wise labels from weak annotations. Firstly, we construct a graph on the superpixels of a training image by combining the low-level spatial relation and high-level semantic content. Meanwhile, scribble or bounding box annotations are embedded into the graph, respectively. Then, GraphNet takes the graph as input and learns to predict high-confidence pseudo image masks by a convolutional network operating directly on graphs. At last, a segmentation network is trained supervised by these pseudo image masks. We comprehensively conduct experiments on the PASCAL VOC 2012 and PASCAL-CONTEXT segmentation benchmarks. Experimental results demonstrate that GraphNet is effective to predict the pixel labels with scribble or bounding box annotations. The proposed framework yields state-of-the-art results in the community.", "title": "" }, { "docid": "9d8b088c8a97b8aa52703c1fcf877675", "text": "The project proposes an efficient implementation for IoT (Internet of Things) used for monitoring and controlling the home appliances via World Wide Web. Home automation system uses the portable devices as a user interface. They can communicate with home automation network through an Internet gateway, by means of low power communication protocols like Zigbee, Wi-Fi etc. This project aims at controlling home appliances via Smartphone using Wi-Fi as communication protocol and raspberry pi as server system. 
The user interacts directly with the system through a web-based interface, while home appliances such as lights, fans and door locks are remotely controlled through a simple website. An extra feature that enhances protection against fire accidents is its capability of detecting smoke, so that in the event of a fire, an alert message and an image are sent to the user's smartphone. The server will be interfaced with relay hardware circuits that control the appliances running at home. The communication with the server allows the user to select the appropriate device, and the server communicates with the corresponding relays. If the Internet connection is down or the server isn't up, the embedded system board can still manage and operate the appliances locally. By this we provide a scalable and cost-effective home automation system.", "title": "" }, { "docid": "db83931d7fef8174acdb3a1f4ef0d043", "text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.", "title": "" }, { "docid": "796c2741afdce3e718306a93e83c1856", "text": "Multi-document summarization has been an important problem in information retrieval. It aims to distill the most important information from a set of documents to generate a compressed summary. Given a sentence graph generated from a set of documents where vertices represent sentences and edges indicate that the corresponding vertices are similar, the extracted summary can be described using the idea of graph domination. In this paper, we propose a new principled and versatile framework for multi-document summarization using the minimum dominating set. We show that four well-known summarization tasks including generic, query-focused, update, and comparative summarization can be modeled as different variations derived from the proposed framework. Approximation algorithms for performing summarization are also proposed and empirical experiments are conducted to demonstrate the effectiveness of our proposed framework.", "title": "" }, { "docid": "d6136f26c7b387693a5f017e6e2e679a", "text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g., eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. 
Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.", "title": "" }, { "docid": "6e47d81ddb9a1632d0ef162c92b0a454", "text": "Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoderdecoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models.", "title": "" }, { "docid": "b02dcd4d78f87d8ac53414f0afd8604b", "text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.", "title": "" }, { "docid": "b610e9bef08ef2c133a02e887b89b196", "text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. 
Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.", "title": "" }, { "docid": "c3d1470f049b9531c3af637408f5f9cb", "text": "Information and communication technology (ICT) is integral in today’s healthcare as a critical piece of support to both track and improve patient and organizational outcomes. Facilitating nurses’ informatics competency development through continuing education is paramount to enhance their readiness to practice safely and accurately in technologically enabled work environments. In this article, we briefly describe progress in nursing informatics (NI) and share a project exemplar that describes our experience in the design, implementation, and evaluation of a NI educational event, a one-day boot camp format that was used to provide foundational knowledge in NI targeted primarily at frontline nurses in Alberta, Canada. We also discuss the project outcomes, including lessons learned and future implications. Overall, the boot camp was successful to raise nurses’ awareness about the importance of informatics in nursing practice.", "title": "" }, { "docid": "a7f2acee9997f3bcb9bbb528bb383a94", "text": "Identifying sparse salient structures from dense pixels is a longstanding problem in visual computing. Solutions to this problem can benefit both image manipulation and understanding. In this paper, we introduce an image transform based on the L1 norm for piecewise image flattening. This transform can effectively preserve and sharpen salient edges and contours while eliminating insignificant details, producing a nearly piecewise constant image with sparse structures. A variant of this image transform can perform edge-preserving smoothing more effectively than existing state-of-the-art algorithms. We further present a new method for complex scene-level intrinsic image decomposition. Our method relies on the above image transform to suppress surface shading variations, and perform probabilistic reflectance clustering on the flattened image instead of the original input image to achieve higher accuracy. Extensive testing on the Intrinsic-Images-in-the-Wild database indicates our method can perform significantly better than existing techniques both visually and numerically. The obtained intrinsic images have been successfully used in two applications, surface retexturing and 3D object compositing in photographs.", "title": "" }, { "docid": "3663d877d157c8ba589e4d699afc460f", "text": "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.", "title": "" }, { "docid": "0db1e1304ec2b5d40790677c9ce07394", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. 
However, due to the input length limit, most previous works can only utilize the lead sentences as input to generate the abstractive summary, which ignores crucial information in the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims to take full advantage of the document information. Furthermore, we present both a streamline strategy and a system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on the CNN/Daily Mail dataset demonstrate that both of our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "f407ea856f2d00dca1868373e1bd9e2f", "text": "The software industry is heading towards centralized computing. Due to this trend, data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can be bought from an external service provider. From the customers’ point of view, the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hardware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing, a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salesforce.com and Google are examples of firms that already have working solutions on the market. Recently, Microsoft also released a preview version of its cloud platform, called Azure. Early adopters can test the platform and development tools free of charge [2, 3, 4]. The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examining how the Azure platform works, the benefits of the Azure platform are explored. The most important benefit of Microsoft’s solution is that it closely resembles the existing Windows environment. Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to the cloud is easy. This partially stems from the fact that Azure’s services can be exploited by an application whether it is run locally or in the cloud.", "title": "" }, { "docid": "2907badaf086752657c09d45fa99111e", "text": "The 3L-NPC (three-level neutral-point-clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performance of the 3L-NPC structure was improved by developing the 3L-ANPC (Active-NPC) converter, which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies. A new PWM strategy is also proposed in the paper. 
It has numerous advantages: (a) natural doubling of the apparent switching frequency without using the flying-capacitor concept, (b) insensitivity of the operating mode to dead times at 50% duty cycle, (c) operation at both high and low switching frequencies without structural modifications, and (d) better balancing of the loss distribution among the switches. The PSIM simulation results are shown in order to validate the proposed PWM strategy and the analysis of the switching states.", "title": "" } ]
scidocsrr
a7cce9cae6e35e04b891a7e3a340ab2b
Sex & the City . How Emotional Factors Affect Financial Choices
[ { "docid": "4fa7ee44cdc4b0cd439723e9600131bd", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "9a1201d68018fce5ce413511dc64e8b7", "text": "In the health sciences it is quite common to carry out studies designed to determine the influence of one or more variables upon a given response variable. When this response variable is numerical, simple or multiple regression techniques are used, depending on the case. If the response variable is a qualitative variable (dichotomic or polychotomic), as for example the presence or absence of a disease, linear regression methodology is not applicable, and simple or multinomial logistic regression is used, as applicable.", "title": "" } ]
[ { "docid": "c367d19e00816538753e6226785d05fd", "text": "BACKGROUND AND OBJECTIVE\nMildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke.\n\n\nMETHODS\nWe performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated.\n\n\nRESULTS\nA total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups.\n\n\nCONCLUSION\nMildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.", "title": "" }, { "docid": "b2332b118b846c9f417558a02975e20a", "text": "This is the third in a series of four tutorial papers on biomedical signal processing and concerns the estimation of the power spectrum (PS) and coherence function (CF) od biomedical data. The PS is introduced and its estimation by means of the discrete Fourier transform is considered in terms of the problem of resolution in the frequency domain. The periodogram is introduced and its variance, bias and the effects of windowing and smoothing are considered. The use of the autocovariance function as a stage in power spectral estimation is described and the effects of windows in the autocorrelation domain are compared with the related effects of windows in the original time domain. The concept of coherence is introduced and the many ways in which coherence functions might be estimated are considered.", "title": "" }, { "docid": "99ea14010fe3acd37952fb355a25b71c", "text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. 
Experiments and comparisons are conducted on an intrusion dataset: the KDD Cup 1999 dataset.", "title": "" }, { "docid": "001784246312172835ceca4461ec28c5", "text": "The string-searching problem is to find all occurrences of pattern(s) in a text string. The Aho-Corasick string searching algorithm simultaneously finds all occurrences of multiple patterns in one pass through the text. On the other hand, the Boyer-Moore algorithm is understood to be the fastest algorithm for a single pattern. By combining the ideas of these two algorithms, we present an efficient string searching algorithm for multiple patterns. The algorithm runs in sublinear time, on the average, as the BM algorithm achieves, and its preprocessing time is linearly proportional to the sum of the lengths of the patterns, like the AC algorithm.", "title": "" }, { "docid": "8e1b10ebb48b86ce151ab44dc0473829", "text": "Cuckoo Search (CS) is a new metaheuristic algorithm. It is being used for solving optimization problems. It was developed in 2009 by Xin-She Yang and Suash Deb. The uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that, CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the experimental results are discussed and directions for future research are proposed. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.", "title": "" }, { "docid": "2a5f555c00d98a87fe8dd6d10e27dc38", "text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.", "title": "" }, { "docid": "350aeae5c69db969c35c673c0be2a98a", "text": "Driver yawning detection is one of the key technologies used in driver fatigue monitoring systems. Real-time driver yawning detection is a very challenging problem due to the dynamics in driver's movements and lighting conditions. In this paper, we present a yawning detection system that consists of a face detector, a nose detector, a nose tracker and a yawning detector. 
Deep learning algorithms are developed for detecting driver face area and nose location. A nose tracking algorithm that combines Kalman filter with a dedicated open-source TLD (Track-Learning-Detection) tracker is developed to generate robust tracking results under dynamic driving conditions. Finally a neural network is developed for yawning detection based on the features including nose tracking confidence value, gradient features around corners of mouth and face motion features. Experiments are conducted on real-world driving data, and results show that the deep convolutional networks can generate a satisfactory classification result for detecting driver's face and nose when compared with other pattern classification methods, and the proposed yawning detection system is effective in real-time detection of driver's yawning states.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "ed28faf2ff89ac4da642593e1b7eef9c", "text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. 
It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.", "title": "" }, { "docid": "1dbb34265c9b01f69262b3270fa24e97", "text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.", "title": "" }, { "docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0", "text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.", "title": "" }, { "docid": "15b05bdc1310d038110b545686082c98", "text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. 
The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.", "title": "" }, { "docid": "b8b2d68955d6ed917900d30e4e15f71e", "text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "11b05bd0c0b5b9319423d1ec0441e8a7", "text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.", "title": "" }, { "docid": "d19a77b3835b7b43acf57da377b11cb4", "text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. 
We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "4bfc1e2fbb2b1dea29360c410e5258b4", "text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. 
CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.", "title": "" }, { "docid": "df78e51c3ed3a6924bf92db6000062e1", "text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.", "title": "" }, { "docid": "6d5480bf1ee5d401e39f5e65d0aaba25", "text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. 
To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.", "title": "" } ]
scidocsrr
8d77035c1879c1e48446e074ee226c60
Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes
[ { "docid": "a112a01246256e38b563f616baf02cef", "text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building. 1Post-Doctoral Scholar, Seismological Laboratory, MC 252-21, California Institute of Technology, Pasadena, CA91125. Email: krishnan@caltech.edu 2Professor, Civil Engineering and Applied Mechanics, MC 104-44, California Institute of Technology, Pasadena, CA-91125", "title": "" } ]
[ { "docid": "b20720aa8ea6fa5fc0f738a605534fbe", "text": "Œe proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of di‚usion is known as early rumor detection, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. Œus, identifying trending rumors demands an ecient yet ƒexible model that is able to capture long-range dependencies among postings and produce distinct representations for the accurate early detection. However, it is a challenging task to apply conventional classi€cation algorithms to rumor detection in earliness since they rely on hand-cra‰ed features which require intensive manual e‚orts in the case of large amount of posts. Œis paper presents a deep aŠention model on the basis of recurrent neural networks (RNN) to learn selectively temporal hidden representations of sequential posts for identifying rumors. Œe proposed model delves so‰-aŠention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep aŠention based RNN model outperforms state-of-thearts that rely on hand-cra‰ed features; (2) the introduction of so‰ aŠention mechanism can e‚ectively distill relevant parts to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.", "title": "" }, { "docid": "ffef016fba37b3dc167a1afb7e7766f0", "text": "We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the Bernoulli multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time T is O( lnT ∆ + 1 ∆3 ). And, for the N -armed bandit problem, the expected regret in time T is O( [ ( ∑N i=2 1 ∆i ) ] lnT ). Our bounds are optimal but for the dependence on ∆i and the constant factors in big-Oh.", "title": "" }, { "docid": "49a538fc40d611fceddd589b0c9cb433", "text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. 
Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.", "title": "" }, { "docid": "97281ba9e6da8460f003bb860836bb10", "text": "In this letter, a novel miniaturized periodic element for constructing a bandpass frequency selective surface (FSS) is proposed. Compared to previous miniaturized structures, the FSS proposed has better miniaturization performance with the dimension of a unit cell only 0.061 λ × 0.061 λ , where λ represents the wavelength of the resonant frequency. Moreover, the miniaturization characteristic is stable with respect to different polarizations and incident angles of the waves illuminating. Both simulation and measurement are taken, and the results obtained demonstrate the claimed performance.", "title": "" }, { "docid": "5bf9ebaecbcd4b713a52d3572e622cbd", "text": "Essay scoring is a complicated processing requiring analyzing, summarizing and judging expertise. Traditional work on essay scoring focused on automatic handcrafted features, which are expensive yet sparse. Neural models offer a way to learn syntactic and semantic features automatically, which can potentially improve upon discrete features. In this paper, we employ convolutional neural network (CNN) for the effect of automatically learning features, and compare the result with the state-of-art discrete baselines. For in-domain and domain-adaptation essay scoring tasks, our neural model empirically outperforms discrete models.", "title": "" }, { "docid": "931c75847fdfec787ad6a31a6568d9e3", "text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.", "title": "" }, { "docid": "182dc182f7c814c18cb83a0515149cec", "text": "This paper discusses about methods for detection of leukemia. Various image processing techniques are used for identification of red blood cell and immature white cells. Different disease like anemia, leukemia, malaria, deficiency of vitamin B12, etc. can be diagnosed accordingly. Objective is to detect the leukemia affected cells and count it. According to detection of immature blast cells, leukemia can be identified and also define that either it is chronic or acute. To detect immature cells, number of methods are used like histogram equalization, linear contrast stretching, some morphological techniques like area opening, area closing, erosion, dilation. 
Watershed transform, K means, histogram equalization & linear contrast stretching, and shape based features are accurate 72.2%, 72%, 73.7 % and 97.8% respectively.", "title": "" }, { "docid": "4f3e37db8d656fe1e746d6d3a37878b5", "text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks, on weekly data; outperforming both substantially.", "title": "" }, { "docid": "ff076ca404a911cc523af1aa51da8f47", "text": "Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of “big data”. However, sometimes, for example in health informatics, we are confronted not a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the “human-in-the-loop” approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-“human-in-the-loop” approach, particularly in opening the “black box”, thereby enabling a human to directly and indirectly manipulating and interacting with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework, and use it on the Traveling Salesman Problem (TSP) which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.", "title": "" }, { "docid": "e0ba4e4b7af3cba6bed51f2f697ebe5e", "text": "In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automative toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are re-moved in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. 
We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly 2× speedup in frames per second and 2× reduction in resource requirements over the original network, with equal or improved accuracy.", "title": "" }, { "docid": "15f0c49a2ddcb20cd8acaa419b2eae44", "text": "Automatic generation of presentation slides for academic papers is a very challenging task. Previous methods for addressing this task are mainly based on document summarization techniques and they extract document sentences to form presentation slides, which are not well-structured and concise. In this study, we propose a phrase-based approach to generate well-structured and concise presentation slides for academic papers. Our approach first extracts phrases from the given paper, and then learns both the saliency of each phrase and the hierarchical relationship between a pair of phrases. Finally a greedy algorithm is used to select and align the salient phrases in order to form the well-structured presentation slides. Evaluation results on a real dataset verify the efficacy of our proposed approach.", "title": "" }, { "docid": "864ab702d0b45235efe66cd9e3bc5e66", "text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.", "title": "" }, { "docid": "eede8e690991c27074a0485c7c046e17", "text": "We performed meta-analyses on 60 neuroimaging (PET and fMRI) studies of working memory (WM), considering three types of storage material (spatial, verbal, and object), three types of executive function (continuous updating of WM, memory for temporal order, and manipulation of information in WM), and interactions between material and executive function. Analyses of material type showed the expected dorsal-ventral dissociation between spatial and nonspatial storage in the posterior cortex, but not in the frontal cortex. Some support was found for left frontal dominance in verbal WM, but only for tasks with low executive demand. Executive demand increased right lateralization in the frontal cortex for spatial WM. Tasks requiring executive processing generally produce more dorsal frontal activations than do storage-only tasks, but not all executive processes show this pattern. Brodmann's areas (BAs) 6, 8, and 9, in the superior frontal cortex, respond most when WM must be continuously updated and when memory for temporal order must be maintained. Right BAs 10 and 47, in the ventral frontal cortex, respond more frequently with demand for manipulation (including dual-task requirements or mental operations). BA 7, in the posterior parietal cortex, is involved in all types of executive function. 
Finally, we consider a potential fourth executive function: selective attention to features of a stimulus to be stored in WM, which leads to increased probability of activating the medial prefrontal cortex (BA 32) in storage tasks.", "title": "" }, { "docid": "e55067bddff5f7f3cb646d02342f419c", "text": "Over the last two decades there have been several process models proposed (and used) for data and information fusion. A common theme of these models is the existence of multiple levels of processing within the data fusion process. In the 1980’s three models were adopted: the intelligence cycle, the JDL model and the Boyd control. The 1990’s saw the introduction of the Dasarathy model and the Waterfall model. However, each of these models has particular advantages and disadvantages. A new model for data and information fusion is proposed. This is the Omnibus model, which draws together each of the previous models and their associated advantages whilst managing to overcome some of the disadvantages. Where possible the terminology used within the Omnibus model is aimed at a general user of data fusion technology to allow use by a distributed audience.", "title": "" }, { "docid": "f4239b2be54e80666bd21d8c50a6b1b0", "text": "Limited work has examined how self-affirmation might lead to positive outcomes beyond the maintenance of a favorable self-image. To address this gap in the literature, we conducted two studies in two cultures to establish the benefits of self-affirmation for psychological well-being. In Study 1, South Korean participants who affirmed their values for 2 weeks showed increased eudaimonic well-being (need satisfaction, meaning, and flow) relative to control participants. In Study 2, U.S. participants performed a self-affirmation activity for 4 weeks. Extending Study 1, after 2 weeks, self-affirmation led both to increased eudaimonic well-being and hedonic well-being (affect balance). By 4 weeks, however, these effects were non-linear, and the increases in affect balance were only present for vulnerable participants-those initially low in eudaimonic well-being. In sum, the benefits of self-affirmation appear to extend beyond self-protection to include two types of well-being.", "title": "" }, { "docid": "8c214f081f47e12d4dccd71b6038d3bf", "text": "Switched reluctance machines (SRMs) are considered as serious candidates for starter/alternator (S/A) systems in more electric cars. Robust performance in the presence of high temperature, safe operation, offering high efficiency, and a very long constant power region, along with a rugged structure contribute to their suitability for this high impact application. To enhance these qualities, we have developed key technologies including sensorless operation over the entire speed range and closed-loop torque and speed regulation. The present paper offers an in-depth analysis of the drive dynamics during motoring and generating modes of operation. These findings will be used to explain our control strategies in the context of the S/A application. Experimental and simulation results are also demonstrated to validate the practicality of our claims.", "title": "" }, { "docid": "cb561e56e60ba0e5eef2034158c544c2", "text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. 
This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.", "title": "" }, { "docid": "dc610cdd3c6cc5ae443cf769bd139e78", "text": "With modern smart phones and powerful mobile devices, Mobile apps provide many advantages to the community but it has also grown the demand for online availability and accessibility. Cloud computing is provided to be widely adopted for several applications in mobile devices. However, there are many advantages and disadvantages of using mobile applications and cloud computing. This paper focuses in providing an overview of mobile cloud computing advantages, disadvantages. The paper discusses the importance of mobile cloud applications and highlights the mobile cloud computing open challenges", "title": "" }, { "docid": "16cac565c6163db83496c41ea98f61f9", "text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.", "title": "" }, { "docid": "fcfc16b94f06bf6120431a348e97b9ac", "text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. 
Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.", "title": "" } ]
scidocsrr
f45440e73526700aa7fc7bca4a71b282
Understanding student learning trajectories using multimodal learning analytics within an embodied-interaction learning environment
[ { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" }, { "docid": "892c75c6b719deb961acfe8b67b982bb", "text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.", "title": "" } ]
[ { "docid": "382eb7a0e8bc572506a40bf3cbe6fd33", "text": "The long-term ambition of the Tactile Internet is to enable a democratization of skill, and how it is being delivered globally. An integral part of this is to be able to transmit touch in perceived real-time, which is enabled by suitable robotics and haptics equipment at the edges, along with an unprecedented communications network. The fifth generation (5G) mobile communications systems will underpin this emerging Internet at the wireless edge. This paper presents the most important technology concepts, which lay at the intersection of the larger Tactile Internet and the emerging 5G systems. The paper outlines the key technical requirements and architectural approaches for the Tactile Internet, pertaining to wireless access protocols, radio resource management aspects, next generation core networking capabilities, edge-cloud, and edge-AI capabilities. The paper also highlights the economic impact of the Tactile Internet as well as a major shift in business models for the traditional telecommunications ecosystem.", "title": "" }, { "docid": "3e5d887ff00e4eff8e408e6d51d747b2", "text": "We present a small object sensitive method for object detection. Our method is built based on SSD (Single Shot MultiBox Detector (Liu et al. 2016)), a simple but effective deep neural network for image object detection. The discrete nature of anchor mechanism used in SSD, however, may cause misdetection for the small objects located at gaps between the anchor boxes. SSD performs better for small object detection after circular shifts of the input image. Therefore, auxiliary feature maps are generated by conducting circular shifts over lower extra feature maps in SSD for small-object detection, which is equivalent to shifting the objects in order to fit the locations of anchor boxes. We call our proposed system Shifted SSD. Moreover, pinpoint accuracy of localization is of vital importance to small objects detection. Hence, two novel methods called Smooth NMS and IoU-Prediction module are proposed to obtain more precise locations. Then for video sequences, we generate trajectory hypothesis to obtain predicted locations in a new frame for further improved performance. Experiments conducted on PASCAL VOC 2007, along with MS COCO, KITTI and our small object video datasets, validate that both mAP and recall are improved with different degrees and the speed is almost the same as SSD.", "title": "" }, { "docid": "79fa1a6ec5490e80909b7cabc37e32aa", "text": "Face identification and detection has become very popular, interesting and wide field of current research area. As there are several algorithms for face detection exist but none of the algorithms globally detect all sorts of human faces among the different colors and intensities in a given picture. In this paper, a novel method for face detection technique has been described. Here, the centers of both the eyes are detected using generic eye template matching method. After detecting the center of both the eyes, the corresponding face bounding box is determined. 
The experimental results have shown that the proposed algorithm is able to accomplish successfully proper detection and to mark the exact face and eye region in the given image.", "title": "" }, { "docid": "f1a7bcd681969d5a5167d1b0397af13a", "text": "The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot).", "title": "" }, { "docid": "57ca7842e7ab21b51c4069e76121fc26", "text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.", "title": "" }, { "docid": "9e3d3783aa566b50a0e56c71703da32b", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. 
On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "6dbf49c714f6e176273317d4274b93de", "text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.", "title": "" }, { "docid": "17a1f03485b74ba0f1efd76e118e2b7a", "text": "DISC Measure, Squeezer, Categorical Data Clustering, Cosine similarity References Rishi Sayal and Vijay Kumar. V. 2011. A novel Similarity Measure for Clustering Categorical Data Sets. International Journal of Computer Application (0975-8887). Aditya Desai, Himanshu Singh and Vikram Pudi. 2011. DISC Data-Intensive Similarity Measure for Categorical Data. Pacific-Asia Conferences on Knowledge Discovery Data Mining. Shyam Boriah, Varun Chandola and Vipin Kumar. 2008. Similarity Measure for Clustering Categorical Data. Comparative Evaluation. SIAM International Conference on Data Mining-SDM. Taoying Li, Yan Chen. 2009. Fuzzy Clustering Ensemble Algorithm for partitional Categorical Data. IEEE, International conference on Business Intelligence and Financial Engineering.", "title": "" }, { "docid": "61fb62e6979789f5f465a41d46f62c57", "text": "Previously, ANSI/IEEE relay current transformer (CT) sizing criteria were based on traditional symmetrical calculations that are usually discussed by technical articles and manufacturers' guidelines. In 1996, IEEE Standard C37.110-1996 introduced (1+X/R) offset multiplying, current asymmetry, and current distortion factors, officially changing the CT sizing guideline. A critical concern is the performance of fast protective schemes (instantaneous or differential elements) during severe saturation of low-ratio CTs. Will the instantaneous element operate before the upstream breaker relay trips? Will the differential element misoperate for out-of-zone faults? 
The use of electromagnetic and analog relay technology does not assure selectivity. Modern microprocessor relays introduce additional uncertainty into the design/verification process with different sampling techniques and proprietary sensing/recognition/trip algorithms. This paper discusses the application of standard CT accuracy classes with modern ANSI/IEEE CT calculation methodology. This paper is the first of a two-part series; Part II provides analytical waveform analysis discussions to illustrate the concepts conveyed in Part I", "title": "" }, { "docid": "9c43da9473facdecda86442e157736db", "text": "The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn is associated with an increase in computational expense far surpassing mobile devices’ capacity. What is worse, app service providers need to collect and utilize a large volume of users’ data, which contain sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risk. To benefit from the on-device deep learning without the capacity and privacy concerns, we design a private model compression framework RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations as well as the implementation on an Android mobile device show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10−6)-differential privacy is guaranteed, the compact model trained by RONA can obtain 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.", "title": "" }, { "docid": "11de13e5347ee392b6535fe4b55eed24", "text": "The requirement for new flexible adaptive grippers is the ability to detect and recognize objects in their environments. It is known that robotic manipulators are highly nonlinear systems, and an accurate mathematical model is difficult to obtain, thus making it difficult no control using conventional techniques. Here, a novel design of an adaptive neuro fuzzy inference strategy (ANFIS) for controlling input displacement of a new adaptive compliant gripper is presented. This design of the gripper has embedded sensors as part of its structure. The use of embedded sensors in a robot gripper gives the control system the ability to control input displacement of the gripper and to recognize particular shapes of the grasping objects. Since the conventional control strategy is a very challenging task, fuzzy logic based controllers are considered as potential candidates for such an application. Fuzzy based controllers develop a control signal which yields on the firing of the rule base. The selection of the proper rule base depending on the situation can be achieved by using an ANFIS controller, which becomes an integrated method of approach for the control purposes. 
In the designed ANFIS scheme, neural network techniques are used to select a proper rule base, which is achieved using the back propagation algorithm. The simulation results presented in this paper show the effectiveness of the developed method. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "351562a44f9126db2f48e2760e26af4e", "text": "It has been widely observed that there is no single “dominant” SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use socalled empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.", "title": "" }, { "docid": "95b2219dc34de9f0fe40e84e6df8a1e3", "text": "Most computer vision and especially segmentation tasks require to extract features that represent local appearance of patches. Relevant features can be further processed by learning algorithms to infer posterior probabilities that pixels belong to an object of interest. Deep Convolutional Neural Networks (CNN) define a particularly successful class of learning algorithms for semantic segmentation, although they proved to be very slow to train even when employing special purpose hardware. We propose, for the first time, a general purpose segmentation algorithm to extract the most informative and interpretable features as convolution kernels while simultaneously building a multivariate decision tree. The algorithm trains several orders of magnitude faster than regular CNNs and achieves state of the art results in processing quality on benchmark datasets.", "title": "" }, { "docid": "5673bc2ca9f08516f14485ef8bbba313", "text": "Analog-to-digital converters are essential building blocks in modern electronic systems. They form the critical link between front-end analog transducers and back-end digital computers that can efficiently implement a wide variety of signal-processing functions. The wide variety of digitalsignal-processing applications leads to the availability of a wide variety of analog-to-digital (A/D) converters of varying price, performance, and quality. 
Ideally, an A/D converter encodes a continuous-time analog input voltage, VIN , into a series of discrete N -bit digital words that satisfy the relation", "title": "" }, { "docid": "3ffe3cf44eb79a9560a873de774ecc67", "text": "Gummy smile constitutes a relatively frequent aesthetic alteration characterized by excessive exhibition of the gums during smiling movements of the upper lip. It is the result of an inadequate relation between the lower edge of the upper lip, the positioning of the anterosuperior teeth, the location of the upper jaw, and the gingival margin position with respect to the dental crown. Altered Passive Eruption (APE) is a clinical situation produced by excessive gum overlapping over the enamel limits, resulting in a short clinical crown appearance, that gives the sensation of hidden teeth. The term itself suggests the causal mechanism, i.e., failure in the passive phase of dental eruption, though there is no scientific evidence to support this. While there are some authors who consider APE to be a risk situation for periodontal health, its clearest clinical implication refers to oral esthetics. APE is a factor that frequently contributes to the presence of a gummy or gingival smile, and it can easily be corrected by periodontal surgery. Nevertheless, it is essential to establish a correct differential diagnosis and good treatment plan. A literature review is presented of the dental eruption process, etiological hypotheses of APE, its morphologic classification, and its clinical relevance.", "title": "" }, { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" }, { "docid": "0bce954374d27d4679eb7562350674fc", "text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. 
Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.", "title": "" }, { "docid": "1e852e116c11a6c7fb1067313b1ffaa3", "text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013", "title": "" }, { "docid": "b2a2fdf56a79c1cb82b8b3a55b9d841d", "text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. This processor is based on the principles of recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm and it can be used for network routing calculations. The objective of the processor is to find the least cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which is well suited for VLSI implementation and reconfigurable hardware.", "title": "" } ]
scidocsrr
60e243933965c060e595ee144ae77075
25 Tweets to Know You: A New Model to Predict Personality with Social Media
[ { "docid": "fdc4efad14d79f1855dddddb6a30ace6", "text": "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase 'sick of' and the word 'depressed'), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive 'my' when mentioning their 'wife' or 'girlfriend' more often than females use 'my' with 'husband' or 'boyfriend'). To date, this represents the largest study, by an order of magnitude, of language and personality.", "title": "" }, { "docid": "b12d3dfe42e5b7ee06821be7dcd11ab9", "text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.", "title": "" } ]
[ { "docid": "f63990edcaa77454126e968eba3d8435", "text": "The OECD's Brain and Learning project (2002) emphasized that many misconceptions about the brain exist among professionals in the field of education. Though these so-called \"neuromyths\" are loosely based on scientific facts, they may have adverse effects on educational practice. The present study investigated the prevalence and predictors of neuromyths among teachers in selected regions in the United Kingdom and the Netherlands. A large observational survey design was used to assess general knowledge of the brain and neuromyths. The sample comprised 242 primary and secondary school teachers who were interested in the neuroscience of learning. It would be of concern if neuromyths were found in this sample, as these teachers may want to use these incorrect interpretations of neuroscience findings in their teaching practice. Participants completed an online survey containing 32 statements about the brain and its influence on learning, of which 15 were neuromyths. Additional data was collected regarding background variables (e.g., age, sex, school type). Results showed that on average, teachers believed 49% of the neuromyths, particularly myths related to commercialized educational programs. Around 70% of the general knowledge statements were answered correctly. Teachers who read popular science magazines achieved higher scores on general knowledge questions. More general knowledge also predicted an increased belief in neuromyths. These findings suggest that teachers who are enthusiastic about the possible application of neuroscience findings in the classroom find it difficult to distinguish pseudoscience from scientific facts. Possessing greater general knowledge about the brain does not appear to protect teachers from believing in neuromyths. This demonstrates the need for enhanced interdisciplinary communication to reduce such misunderstandings in the future and establish a successful collaboration between neuroscience and education.", "title": "" }, { "docid": "8fffe94d662d46b977e0312dc790f4a4", "text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. 
Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. 2008 Published by Elsevier Ltd.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" }, { "docid": "25c815f5fc0cf87bdef5e069cbee23a8", "text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.", "title": "" }, { "docid": "89c992c7dbe37dc9d08a25fd62c09e1a", "text": "Research into antigay violence has been limited by a lack of attention to issues of gender presentation. Understanding gender nonconformity is important for addressing antigay prejudice and hate crimes. We assessed experiences of gender-nonconformity-related prejudice among 396 Black, Latino, and White lesbian, gay, and bisexual individuals recruited from diverse community venues in New York City. We assessed the prevalence and contexts of prejudice-related life events and everyday discrimination using both quantitative and qualitative approaches. Gender nonconformity had precipitated major prejudice events for 9% of the respondents and discrimination instances for 19%. Women were more likely than men to report gender-nonconformity-related discrimination but there were no differences by other demographic characteristics. 
In analysis of events narratives, we show that gender nonconformity prejudice is often intertwined with antigay prejudice. Our results demonstrate that both constructs should be included when addressing prejudice and hate crimes targeting lesbian, gay, bisexual, and transgender individuals and communities.", "title": "" }, { "docid": "8c0d50acd23e4995c4717ef049708a1c", "text": "What do you do to start reading introduction to computing and programming in python a multimedia approach 2nd edition? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this introduction to computing and programming in python a multimedia approach 2nd edition.", "title": "" }, { "docid": "279de90035c16de3f3acfcd4f352a3c9", "text": "Purpose – To develop a model that bridges the gap between CSR definitions and strategy and offers guidance to managers on how to connect socially committed organisations with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. Design/methodology/approach – This paper offers a critical evaluation of the theoretical foundations of corporate responsibility (CR) and proposes a new strategic approach to CR, which seeks to overcome the limitations of normative definitions. To address this perceived issue, the authors propose a new processual model of CR, which they refer to as the 3C-SR model. Findings – The 3C-SR model can offer practical guidelines to managers on how to connect with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. It is argued that many of the redefinitions of CR for a contemporary audience are normative exhortations (“calls to arms”) that fail to provide managers with the conceptual resources to move from “ought” to “how”. Originality/value – The 3C-SR model offers a novel approach to CR in so far as it addresses strategy, operations and markets in a single framework.", "title": "" }, { "docid": "54f2ad8bb43cf1705c2258b779397eb6", "text": "This paper presents a compact planar ultra-wideband (UWB) microstrip antenna for microwave medical applications. The proposed antenna has a low profile structure, consisting of a modified radiating patch with stair steps and open slots, microstrip feed line, and T-like shape slots at the ground plane. The optimized antenna is capable of being operated in frequency range of 3.06–11.4 GHz band having good omnidirectional radiation pattern and high gain, which satisfies the requirements of UWB (3.1–10.6 GHz) applications. The antenna system has a compact size of 18×30×0.8mm3. These features make the proposed UWB antenna a good candidate for microwave medical imaging applications.", "title": "" }, { "docid": "8c34f43e7d3f760173257fbbc58c22ca", "text": "High voltage pulse generators can be used effectively in water treatment applications, as applying a pulsed electric field on the infected sample guarantees killing of harmful germs and bacteria. In this paper, a new high voltage pulse generator with closed loop control on its output voltage is proposed. 
The proposed generator is based on DC-to-DC boost converter in conjunction with capacitor-diode voltage multiplier (CDVM), and can be fed from low-voltage low-frequency AC supply, i.e. utility mains. The proposed topology provides transformer-less operation which reduces size and enhances the overall efficiency. A Detailed design of the proposed pulse generator has been presented as well. The proposed approach is validated by simulation as well as experimental results.", "title": "" }, { "docid": "622b0d9526dfee6abe3a605fa83e92ed", "text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.", "title": "" }, { "docid": "9039058c93aeaa99dae15617e5032b33", "text": "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing crossdomain recommendation models in two aspects. First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms stateof-the-art cross-domain recommendation methods.", "title": "" }, { "docid": "9e31cedf404c989d15a2f06c5800f207", "text": "For automatic driving, vehicles must be able to recognize their environment and take control of the vehicle. The vehicle must perceive relevant objects, which includes other traffic participants as well as infrastructure information, assess the situation and generate appropriate actions. This work is a first step of integrating previous works on environment perception and situation analysis toward automatic driving strategies. We present a method for automatic cruise control of vehicles in urban environments. The longitudinal velocity is influenced by the speed limit, the curvature of the lane, the state of the next traffic light and the most relevant target on the current lane. 
The necessary acceleration is computed in respect to the information which is estimated by an instrumented vehicle.", "title": "" }, { "docid": "60b3460f1ae554c6d24b9b982484d0c1", "text": "Archaeological remote sensing is not a novel discipline. Indeed, there is already a suite of geoscientific techniques that are regularly used by practitioners in the field, according to standards and best practice guidelines. However, (i) the technological development of sensors for data capture; (ii) the accessibility of new remote sensing and Earth Observation data; and (iii) the awareness that a combination of different techniques can lead to retrieval of diverse and complementary information to characterize landscapes and objects of archaeological value and significance, are currently three triggers stimulating advances in methodologies for data acquisition, signal processing, and the integration and fusion of extracted information. The Special Issue “Remote Sensing and Geosciences for Archaeology” therefore presents a collection of scientific contributions that provides a sample of the state-of-the-art and forefront research in this field. Site discovery, understanding of cultural landscapes, augmented knowledge of heritage, condition assessment, and conservation are the main research and practice targets that the papers published in this Special Issue aim to address.", "title": "" }, { "docid": "80ccc8b5f9e68b5130a24fe3519b9b62", "text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.", "title": "" }, { "docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5", "text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. 
In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.", "title": "" }, { "docid": "06bfa716dd067d05229c92dc66757772", "text": "Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study. Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.", "title": "" }, { "docid": "7e647cac9417bf70acd8c0b4ee0faa9b", "text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.", "title": "" }, { "docid": "ac3511f0a3307875dc49c26da86afcfb", "text": "With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. 
We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.", "title": "" }, { "docid": "b418470025d74d745e75225861a1ed7e", "text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.", "title": "" }, { "docid": "5415bb23210d1e0c370cf2ab0898affc", "text": "PURPOSE\nTo compare a developmental indirect resin composite with an established, microfilled directly placed resin composite used to restore severely worn teeth. The cause of the tooth wear was a combination of erosion and attrition.\n\n\nMATERIALS AND METHODS\nOver a 3-year period, a total of 32 paired direct or indirect microfilled resin composite restorations were placed on premolars and molars in 16 patients (mean age: 43 years, range: 25 to 62) with severe tooth wear. A further 26 pairs of resin composite were placed in 13 controls (mean age: 39 years, range 28 to 65) without evidence of tooth wear. The material was randomly selected for placement in the left or right sides of the mouth.\n\n\nRESULTS\nSixteen restorations were retained in the tooth wear group (7 indirect and 9 direct), 7 (22%) fractured (4 indirect and 3 direct), and 9 (28%) were completely lost (5 indirect and 4 direct). There was no statistically significant difference in failure rates between the materials in this group. The control group had 21 restorations (80%) that were retained (10 indirect and 12 direct), a significantly lower rate of failure than in the tooth wear patients (P = .027).\n\n\nCONCLUSION\nThe results of this short-term study suggest that the use of direct and indirect resin composites for restoring worn posterior teeth is contraindicated.", "title": "" } ]
scidocsrr
81099c920db32cea29cfb49c4efe9cd7
The effect of Gamified mHealth App on Exercise Motivation and Physical Activity
[ { "docid": "05d9a8471939217983c1e47525ff595e", "text": "BACKGROUND\nMobile phone health apps may now seem to be ubiquitous, yet much remains unknown with regard to their usage. Information is limited with regard to important metrics, including the percentage of the population that uses health apps, reasons for adoption/nonadoption, and reasons for noncontinuance of use.\n\n\nOBJECTIVE\nThe purpose of this study was to examine health app use among mobile phone owners in the United States.\n\n\nMETHODS\nWe conducted a cross-sectional survey of 1604 mobile phone users throughout the United States. The 36-item survey assessed sociodemographic characteristics, history of and reasons for health app use/nonuse, perceived effectiveness of health apps, reasons for stopping use, and general health status.\n\n\nRESULTS\nA little over half (934/1604, 58.23%) of mobile phone users had downloaded a health-related mobile app. Fitness and nutrition were the most common categories of health apps used, with most respondents using them at least daily. Common reasons for not having downloaded apps were lack of interest, cost, and concern about apps collecting their data. Individuals more likely to use health apps tended to be younger, have higher incomes, be more educated, be Latino/Hispanic, and have a body mass index (BMI) in the obese range (all P<.05). Cost was a significant concern among respondents, with a large proportion indicating that they would not pay anything for a health app. Interestingly, among those who had downloaded health apps, trust in their accuracy and data safety was quite high, and most felt that the apps had improved their health. About half of the respondents (427/934, 45.7%) had stopped using some health apps, primarily due to high data entry burden, loss of interest, and hidden costs.\n\n\nCONCLUSIONS\nThese findings suggest that while many individuals use health apps, a substantial proportion of the population does not, and that even among those who use health apps, many stop using them. These data suggest that app developers need to better address consumer concerns, such as cost and high data entry burden, and that clinical trials are necessary to test the efficacy of health apps to broaden their appeal and adoption.", "title": "" }, { "docid": "5e7a06213a32e0265dcb8bc11a5bb3f1", "text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.", "title": "" }, { "docid": "be08b71c9af0e27f4f932919c2aaa24b", "text": "Gamification is the \"use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). 
A frequently used model for gamification is to equate an activity in the non-game context with points and have external rewards for reaching specified point thresholds. One significant problem with this model of gamification is that it can reduce the internal motivation that the user has for the activity, as it replaces internal motivation with external motivation. If, however, the game design elements can be made meaningful to the user through information, then internal motivation can be improved as there is less need to emphasize external rewards. This paper introduces the concept of meaningful gamification through a user-centered exploration of theories behind organismic integration theory, situational relevance, situated motivational affordance, universal design for learning, and player-generated content. A Brief Introduction to Gamification One definition of gamification is \"the use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A common implementation of gamification is to take the scoring elements of video games, such as points, levels, and achievements, and apply them to a work or educational context. While the term is relatively new, the concept has been around for some time through loyalty systems like frequent flyer miles, green stamps, and library summer reading programs. These gamification programs can increase the use of a service and change behavior, as users work toward meeting these goals to reach external rewards (Zichermann & Cunningham, 2011, p. 27). Gamification has met with significant criticism by those who study games. One problem is with the name. By putting the term \"game\" first, it implies that the entire activity will become an engaging experience, when, in reality, gamification typically uses only the least interesting part of a game the scoring system. The term \"pointsification\" has been suggested as a label for gamification systems that add nothing more than a scoring system to a non-game activity (Robertson, 2010). One definition of games is \"a form of play with goals and structure\" (Maroney, 2001); the points-based gamification focuses on the goals and leaves the play behind. Ian Bogost suggests the term be changed to \"exploitationware,\" as that is a better description of what is really going on (2011). The underlying message of these criticisms of gamification is that there are more effective ways than a scoring system to engage users. Another concern is that organizations getting involved with gamification are not aware of the potential long-term negative impact of gamification. Underlying the concept of gamification is motivation. People can be driven to do something because of internal or external motivation. A meta-analysis by Deci, Koestner, and Ryan of 128 studies that examined motivation in educational settings found that almost all forms of rewards (except for non-controlling verbal rewards) reduced internal motivation (2001). The implication of this is that once gamification is used to provide external motivation, the user's internal motivation decreases. If the organization starts using gamification based upon external rewards and then decides to stop the rewards program, that organization will be worse off than when it started as users will be less likely to return to the behavior without the external reward (Deci, Koestner & Ryan, 2001). 
In the book Gamification by Design, the authors claim that this belief in internal motivation over extrinsic rewards is unfounded, and gamification can be used for organizations to control the behavior of users by replacing those internal motivations with extrinsic rewards. They do admit, though, that \"once you start giving someone a reward, you have to keep her in that reward loop forever\" (Zichermann & Cunningham, 2011, p. 27). Further exploration of the meta-analysis of motivational literature in education found that if the task was already uninteresting, reward systems did not reduce internal motivation, as there was little internal motivation to start with. The authors concluded that \"the issue is how to facilitate people's understanding the importance of the activity to themselves and thus internalizing its regulation so they will be self-motivated to perform it\" (2001, p. 15). The goal of this paper is to explore theories useful in user-centered gamification that is meaningful to the user and therefore does not depend upon external rewards. Organismic Integration Theory Organismic Integration Theory (OIT) is a sub-theory of self-determination theory out of the field of Education created by Deci and Ryan (2004). Self-determination theory is focused on what drives an individual to make choices without external influence. OIT explores how different types of external motivations can be integrated with the underlying activity into someone's own sense of self. Rather than state that motivations are either internalized or not, this theory presents a continuum based upon how much external control is integrated along with the desire to perform the activity. If there is heavy external control provided with a reward, then aspects of that external control will be internalized as well, while if there is less external control that goes along with the adaptation of an activity, then the activity will be more self-regulated. External rewards unrelated to the activity are the least likely to be integrated, as the perception is that someone else is controlling the individual's behavior. Rewards based upon gaining or losing status that tap into the ego create an introjected regulation of behavior, and while this can be intrinsically accepted, the controlling aspect of these rewards causes the loss of internal motivation. Allowing users to self-identify with goals or groups that are meaningful is much more likely to produce autonomous, internalized behaviors, as the user is able to connect these goals to other values he or she already holds. A user who has fully integrated the activity along with his or her personal goals and needs is more likely to see the activity as positive than if there is external control integrated with the activity (Deci & Ryan, 2004). OIT speaks to the importance of creating a gamification system that is meaningful to the user, assuming that the goal of the system is to create long-term systemic change where the users feel positive about engaging in the non-game activity. On the other side, if too many external controls are integrated with the activity, the user can have negative feelings about engaging in the activity. To avoid negative feelings, the game-based elements of the activity need to be meaningful and rewarding without the need for external rewards.
In order for these activities to be meaningful to a specific user, however, they have to be relevant to that user. Situational Relevance and Situated Motivational Affordance One of the key research areas in Library and Information Science has been about the concept of relevance as related to information retrieval. A user has an information need, and a relevant document is one that resolves some of that information need. The concept of relevance is important in determining the effectiveness of search tools and algorithms. Many research projects that have compared search tools looked at the same query posed to different systems, and then used judges to determine what was a \"relevant\" response to that query. This approach has been heavily critiqued, as there are many variables that affect if a user finds something relevant at that moment in his or her searching process. Schamber reviewed decades of research to find generalizable criteria that could be used to determine what is truly relevant to a query and came to the conclusion that the only way to know if something is relevant is to ask the user (1994). Two users with the same search query will have different information backgrounds, so that a document that is relevant for one user may not be relevant to another user. This concept of \"situational relevance\" is important when thinking about gamification. When someone else creates goals for a user, it is akin to an external judge deciding what is relevant to a query. Without involving the user, there is no way to know what goals are relevant to a user's background, interest, or needs. In a points-based gamification system, the goal of scoring points is less likely to be relevant to a user if the activity that the points measure is not relevant to that user. For example, in a hybrid automobile, the gamification systems revolve around conservation and the point system can reflect how much energy is being saved. If the concept of saving energy is relevant to a user, then a point system based upon that concept will also be relevant to that user. If the user is not internally concerned with saving energy, then a gamification system based upon saving energy will not be relevant to that user. There may be other elements of the driving experience that are of interest to a user, so if each user can select what aspect of the driving experience is measured, more users will find the system to be relevant. By involving the user in the creation or customization of the gamification system, the user can select or create meaningful game elements and goals that fall in line with their own interests. A related theory out of Human-Computer Interaction that has been applied to gamification is “situated motivational affordance” (Deterding, 2011b). This model was designed to help gamification designers consider the context of each o", "title": "" } ]
[ { "docid": "0c6b6575616ad22dab5bac9c25907d36", "text": "Identifying students’ learning styles has several benefits such as making students aware of their strengths and weaknesses when it comes to learning and the possibility to personalize their learning environment to their learning styles. While there exist learning style questionnaires for identifying a student’s learning style, such questionnaires have several disadvantages and therefore, research has been conducted on automatically identifying learning styles from students’ behavior in a learning environment. Current approaches to automatically identify learning styles have an average precision between 66% and 77%, which shows the need for improvements in order to use such automatic approaches reliably in learning environments. In this paper, four computational intelligence algorithms (artificial neural network, genetic algorithm, ant colony system and particle swarm optimization) have been investigated with respect to their potential to improve the precision of automatic learning style identification. Each algorithm was evaluated with data from 75 students. The artificial neural network shows the most promising results with an average precision of 80.7%, followed by particle swarm optimization with an average precision of 79.1%. Improving the precision of automatic learning style identification allows more students to benefit from more accurate information about their learning styles as well as more accurate personalization towards accommodating their learning styles in a learning environment. Furthermore, teachers can have a better understanding of their students and be able to provide more appropriate interventions.", "title": "" }, { "docid": "8694f84e4e2bd7da1e678a3b38ccd447", "text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.", "title": "" }, { "docid": "0c3b25e74497fb2b76c9943b1237979b", "text": "Massive  (Multiple-Input–Multiple-Output) is a wireless technology which aims to serve several different devices simultaneously in the same frequency band through spatial multiplexing, made possible by using a large number of antennas at the base station. e many antennas facilitates efficient beamforming, based on channel estimates acquired from uplink reference signals, which allows the base station to transmit signals exactly where they are needed. e multiplexing together with the array gain from the beamforming can increase the spectral efficiency over contemporary systems. One challenge of practical importance is how to transmit data in the downlink when no channel state information is available. When a device initially joins the network, prior to transmiing uplink reference signals that enable beamforming, it needs system information—instructions on how to properly function within the network. It is transmission of system information that is the main focus of this thesis. 
In particular, the thesis analyzes how the reliability of the transmission of system information depends on the available amount of diversity. It is shown how downlink reference signals, space-time block codes, and power allocation can be used to improve the reliability of this transmission. In order to estimate the uplink and downlink channels from uplink reference signals, which is imperative to ensure scalability in the number of base station antennas, massive MIMO relies on channel reciprocity. This thesis shows that the principles of channel reciprocity can also be exploited by a jammer, a malicious transmitter, aiming to disrupt legitimate communication between two devices. A heuristic scheme is proposed in which the jammer estimates the channel to a target device blindly, without any knowledge of the transmitted legitimate signals, and subsequently beamforms noise towards the target. Under the same power constraint, the proposed jammer can disrupt the legitimate link more effectively than a conventional omnidirectional jammer in many cases.", "title": "" }, { "docid": "17ba29c670e744d6e4f9e93ceb109410", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-through, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "41ef29542308363b180aa7685330b905", "text": "We conducted a literature review on systems that track learning analytics data (e.g., resource use, time spent, assessment data, etc.) and provide a report back to students in the form of visualizations, feedback, or recommendations. This review included a rigorous article search process; 945 articles were identified in the initial search. After filtering out articles that did not meet the inclusion criteria, 94 articles were included in the final analysis. Articles were coded on five categories chosen based on previous work done in this area: functionality, data sources, design analysis, perceived effects, and actual effects. The purpose of this review is to identify trends in the current student-facing learning analytics reporting system literature and provide recommendations for learning analytics researchers and practitioners for future work.", "title": "" }, { "docid": "6dddd252eec80ec4f3535a82e25809cf", "text": "The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. 
The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.", "title": "" }, { "docid": "5d0cdaf761922ef5caab3b00986ba87c", "text": "OBJECTIVE\nWe have previously reported an automated method for within-modality (e.g., PET-to-PET) image alignment. We now describe modifications to this method that allow for cross-modality registration of MRI and PET brain images obtained from a single subject.\n\n\nMETHODS\nThis method does not require fiducial markers and the user is not required to identify common structures on the two image sets. To align the images, the algorithm seeks to minimize the standard deviation of the PET pixel values that correspond to each MRI pixel value. The MR images must be edited to exclude nonbrain regions prior to using the algorithm.\n\n\nRESULTS AND CONCLUSION\nThe method has been validated quantitatively using data from patients with stereotaxic fiducial markers rigidly fixed in the skull. Maximal three-dimensional errors of < 3 mm and mean three-dimensional errors of < 2 mm were measured. Computation time on a SPARCstation IPX varies from 3 to 9 min to align MR image sets with [18F]fluorodeoxyglucose PET images. The MR alignment with noisy H2(15)O PET images typically requires 20-30 min.", "title": "" }, { "docid": "7a4bb28ae7c175a018b278653e32c3a1", "text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. 
Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.", "title": "" }, { "docid": "8e5a0b0310fc77b5ca618c5b7e924d64", "text": "Network analysis has an increasing role in our effort to understand the complexity of biological systems. This is because of our ability to generate large data sets, where the interaction or distance between biological components can be either measured experimentally or calculated. Here we describe the use of BioLayout Express3D, an application that has been specifically designed for the integration, visualization and analysis of large network graphs derived from biological data. We describe the basic functionality of the program and its ability to display and cluster large graphs in two- and three-dimensional space, thereby rendering graphs in a highly interactive format. Although the program supports the import and display of various data formats, we provide a detailed protocol for one of its unique capabilities, the network analysis of gene expression data and a more general guide to the manipulation of graphs generated from various other data types.", "title": "" }, { "docid": "921b4ecaed69d7396285909bd53a3790", "text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.", "title": "" }, { "docid": "4d0889329f9011adc05484382e4f5dc0", "text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. 
Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.", "title": "" }, { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is significantly less time consuming than pure model-based approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "c52c6c70ffda274af6a32ed5d1316f08", "text": "Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1 − β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 − β. Our method involves the solution of tractable conic programs of moderate size. Notation: For a finite set X = {1, . . . , X}, M(X) denotes the probability simplex in R^X. An X-valued random variable χ has distribution m ∈ M(X), denoted by χ ∼ m, if P(χ = x) = m_x for all x ∈ X. By default, all vectors are column vectors. We denote by e_k the kth canonical basis vector, while e denotes the vector whose components are all ones. In both cases, the dimension will usually be clear from the context. For square matrices A and B, the relation A ⪰ B indicates that the matrix A − B is positive semidefinite. We denote the space of symmetric n × n matrices by S. The declaration f : X ↦c Y (f : X ↦a Y) implies that f is a continuous (affine) function from X to Y. For a matrix A, we denote its ith row by Ai· (a row vector) and its jth column by A·j.", "title": "" }, { "docid": "3afea784f4a9eb635d444a503266d7cd", "text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. 
Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.", "title": "" }, { "docid": "7218d7f8fb8791ab35e878eb61ea92e7", "text": "We present a novel approach for vision-based road direction detection for autonomous Unmanned Ground Vehicles (UGVs). The proposed method utilizes only monocular vision information similar to human perception to detect road directions with respect to the vehicle. The algorithm searches for a global feature of the roads due to perspective projection (so-called vanishing point) to distinguish road directions. The proposed approach consists of two stages. The first stage estimates the vanishing-point locations from single frames. The second stage uses a Rao-Blackwellised particle filter to track initial vanishing-point estimations over a sequence of images in order to provide more robust estimation. Simultaneously, the direction of the road ahead of the vehicle is predicted, which is prerequisite information for vehicle steering and path planning. The proposed approach assumes minimum prior knowledge about the environment and can cope with complex situations such as ground cover variations, different illuminations, and cast shadows. Its performance is evaluated on video sequences taken during test run of the DARPA Grand Challenge.", "title": "" }, { "docid": "d2304dae0f99bf5e5b46d4ceb12c0d69", "text": "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at https://github.com/art-programmer/FloorNet.", "title": "" }, { "docid": "90e02beb3d51c5d3715e3baab3056561", "text": "¶ Despite the widespread popularity of online opinion forums among consumers, the business value that such systems bring to organizations has, so far, remained an unanswered question. This paper addresses this question by studying the value of online movie ratings in forecasting motion picture revenues. First, we conduct a survey where a nationally representative sample of subjects who do not rate movies online is asked to rate a number of recent movies. Their ratings exhibit high correlation with online ratings for the same movies. We thus provide evidence for the claim that online ratings can be considered as a useful proxy for word-of-mouth about movies. 
Inspired by the Bass model of product diffusion, we then develop a motion picture revenue-forecasting model that incorporates the impact of both publicity and word of mouth on a movie's revenue trajectory. Using our model, we derive notably accurate predictions of a movie's total revenues from statistics of user reviews posted on Yahoo! Movies during the first week of a new movie's release. The results of our work provide encouraging evidence for the value of publicly available online forum information to firms for real-time forecasting and competitive analysis. ¶ This is a preliminary draft of a work in progress. It is being distributed to seminar participants for comments and discussion.", "title": "" }, { "docid": "68c7509ec0261b1ddccef7e3ad855629", "text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.", "title": "" }, { "docid": "a1f930147ad3c3ef48b6352e83d645d0", "text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). 
Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.", "title": "" }, { "docid": "2130cc3df3443c912d9a38f83a51ab14", "text": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and activepixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in endto-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car’s on-board diagnostics interface. As an example application, we performed a preliminary end-toend learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data.", "title": "" } ]
scidocsrr
7cfc3eba155da58e4e5d2f350775f8e6
Spherical symmetric 3D local ternary patterns for natural, texture and biomedical image indexing and retrieval
[ { "docid": "b29caaa973e60109fbc2f68e0eb562a6", "text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity ( 0 2 0 ) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (Ds) . In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.", "title": "" }, { "docid": "555c873864a484bc60c0b27fec44edd7", "text": "A new algorithm for medical image retrieval is presented in the paper. An 8-bit grayscale image is divided into eight binary bit-planes, and then binary wavelet transform (BWT) which is similar to the lifting scheme in real wavelet transform (RWT) is performed on each bitplane to extract the multi-resolution binary images. The local binary pattern (LBP) features are extracted from the resultant BWT sub-bands. Three experiments have been carried out for proving the effectiveness of the proposed algorithm. Out of which two are meant for medical image retrieval and one for face retrieval. It is further mentioned that the database considered for three experiments are OASIS magnetic resonance imaging (MRI) database, NEMA computer tomography (CT) database and PolyU-NIRFD face database. The results after investigation shows a significant improvement in terms of their evaluation measures as compared to LBP and LBP with Gabor transform.", "title": "" } ]
[ { "docid": "71034fd57c81f5787eb1642e24b44b82", "text": "A novel dual-band microstrip antenna with omnidirectional circularly polarized (CP) and unidirectional CP characteristic for each band is proposed in this communication. Function of dual-band dual-mode is realized based on loading with metamaterial structure. Since the fields of the fundamental modes are most concentrated on the fringe of the radiating patch, modifying the geometry of the radiating patch has little effect on the radiation patterns of the two modes (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0, + 1$</tex></formula> mode). CP property for the omnidirectional zeroth-order resonance (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0$</tex> </formula> mode) is achieved by employing curved branches in the radiating patch. Then a 45<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex> </formula> inclined rectangular slot is etched in the center of the radiating patch to excite the CP property for the <formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = + 1$</tex></formula> mode. A prototype is fabricated to verify the properties of the antenna. Both simulation and measurement results illustrate that this single-feed antenna is valuable in wireless communication for its low-profile, radiation pattern selectivity and CP characteristic.", "title": "" }, { "docid": "fd5e6dcb20280daad202f34cd940e7ce", "text": "Chapters cover topics in areas such as P and NP, space complexity, randomness, computational problems that are (or appear) infeasible to solve, pseudo-random generators, and probabilistic proof systems. The introduction nicely summarizes the material covered in the rest of the book and includes a diagram of dependencies between chapter topics. Initial chapters cover preliminary topics as preparation for the rest of the book. These are more than topical or historical summaries but generally not sufficient to fully prepare the reader for later material. Readers should approach this text already competent at undergraduate-level algorithms in areas such as basic analysis, algorithm strategies, fundamental algorithm techniques, and the basics for determining computability. Elective work in P versus NP or advanced analysis would be valuable but that isn‟t really required.", "title": "" }, { "docid": "a58c708051c728754a00fa77a54be83c", "text": "Vol. 44, No. 6, 2015 We developed a classroom observation protocol for quantitatively measuring student engagement in large university classes. The Behavioral Engagement Related to Instruction (BERI) protocol can be used to provide timely feedback to instructors as to how they can improve student engagement in their classrooms. We tested BERI on seven courses with different instructors and pedagogy. BERI achieved excellent interrater agreement (>95%) with a one-hour training session with new observers. It also showed consistent patterns of variation in engagement with instructor actions and classroom activity. Most notably, it showed that there was substantially higher engagement among the same group of students when interactive teaching methods were used compared with more traditional didactic methods. The same general variations in student engagement with instructional methods were present in all parts of the room and for different instructors. 
A New Tool for Measuring Student Behavioral Engagement in Large University Classes", "title": "" }, { "docid": "724845cb5c9f531e09f2c8c3e6f52fe4", "text": "Deep learning has given way to a new era of machine learning, apart from computer vision. Convolutional neural networks have been implemented in image classification, segmentation and object detection. Despite recent advancements, we are still in the very early stages and have yet to settle on best practices for network architecture in terms of deep design, small in size and a short training time. In this work, we propose a very deep neural network comprised of 16 Convolutional layers compressed with the Fire Module adapted from the SQUEEZENET model. We also call for the addition of residual connections to help suppress degradation. This model can be implemented on almost every neural network model with fully incorporated residual learning. This proposed model Residual-Squeeze-VGG16 (ResSquVGG16) trained on the large-scale MIT Places365-Standard scene dataset. In our tests, the model performed with accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation accuracy while also enjoying a 23.86% reduction in training time and an 88.4% reduction in size. In our tests, this model was trained from scratch. Keywords— Convolutional Neural Networks; VGG16; Residual learning; Squeeze Neural Networks; Residual-Squeeze-VGG16; Scene Classification; ResSquVGG16.", "title": "" }, { "docid": "743104d53e9f9415366c2903020aa9e1", "text": "This paper provides a detailed analysis of a SOI CMOS tunable capacitor for antenna tuning. Design expressions for a switched capacitor network are given and quality factor of the whole network is expressed as a function of design parameters. Application to antenna aperture tuning is described by combining a 130 nm SOI CMOS tunable capacitor with a printed notch antenna. The proposed tunable multiband antenna can be tuned from 420 MHz to 790 MHz, with an associated radiation efficiency in the 33-73% range.", "title": "" }, { "docid": "7c291acaf26a61dc5155af21d12c2aaf", "text": "Recently, deep learning and deep neural networks have attracted considerable attention and emerged as one predominant field of research in the artificial intelligence community. The developed techniques have also gained widespread use in various domains with good success, such as automatic speech recognition, information retrieval and text classification, etc. Among them, long short-term memory (LSTM) networks are well suited to such tasks, which can capture long-range dependencies among words efficiently, meanwhile alleviating the gradient vanishing or exploding problem during training effectively. Following this line of research, in this paper we explore a novel use of a Siamese LSTM based method to learn more accurate document representation for text categorization. Such a network architecture takes a pair of documents with variable lengths as the input and utilizes pairwise learning to generate distributed representations of documents that can more precisely render the semantic distance between any pair of documents. In doing so, documents associated with the same semantic or topic label could be mapped to similar representations having a relatively higher semantic similarity. Experiments conducted on two benchmark text categorization tasks, viz. 
IMDB and 20Newsgroups, show that using a three-layer deep neural network based classifier that takes a document representation learned from the Siamese LSTM sub-networks as the input can achieve competitive performance in relation to several state-of-the-art methods.", "title": "" }, { "docid": "081dbece10d1363eca0ac01ce0260315", "text": "With the surge of mobile internet traffic, Cloud RAN (C-RAN) becomes an innovative architecture to help mobile operators maintain profitability and financial growth as well as to provide better services to the customers. It consists of Base Band Units (BBU) of several base stations, which are co-located in a secured place called Central Office and connected to Radio Remote Heads (RRH) via high bandwidth, low latency links. With BBU centralization in C-RAN, handover, the most important feature for mobile communications, could achieve simplified procedure or improved performance. In this paper, we analyze the handover performance of C-RAN over a baseline decentralized RAN (D-RAN) for GSM, UMTS and LTE systems. The results indicate that, lower total average handover interrupt time could be achieved in GSM thanks to the synchronous nature of handovers in C-RAN. For UMTS, inter-NodeB soft handover in D-RAN would become intra-pool softer handover in C-RAN. This brings some gains in terms of reduced signalling, less Iub transport bearer setup and reduced transport bandwidth requirement. For LTE X2-based inter-eNB handover, C-RAN could reduce the handover delay and to a large extent eliminate the risk of UE losing its connection with the serving cell while still waiting for the handover command, which in turn decrease the handover failure rate.", "title": "" }, { "docid": "99c944265ca0d5d9de5bf5855c6ad1f4", "text": "This study was designed to explore the impact of Yoga and Meditation based lifestyle intervention (YMLI) on cellular aging in apparently healthy individuals. During this 12-week prospective, open-label, single arm exploratory study, 96 apparently healthy individuals were enrolled to receive YMLI. The primary endpoints were assessment of the change in levels of cardinal biomarkers of cellular aging in blood from baseline to week 12, which included DNA damage marker 8-hydroxy-2'-deoxyguanosine (8-OH2dG), oxidative stress markers reactive oxygen species (ROS), and total antioxidant capacity (TAC), and telomere attrition markers telomere length and telomerase activity. The secondary endpoints were assessment of metabotrophic blood biomarkers associated with cellular aging, which included cortisol, β-endorphin, IL-6, BDNF, and sirtuin-1. After 12 weeks of YMLI, there were significant improvements in both the cardinal biomarkers of cellular aging and the metabotrophic biomarkers influencing cellular aging compared to baseline values. The mean levels of 8-OH2dG, ROS, cortisol, and IL-6 were significantly lower and mean levels of TAC, telomerase activity, β-endorphin, BDNF, and sirtuin-1 were significantly increased (all values p < 0.05) post-YMLI. The mean level of telomere length was increased but the finding was not significant (p = 0.069). YMLI significantly reduced the rate of cellular aging in apparently healthy population.", "title": "" }, { "docid": "cb46b6331371cf3b790ba2b10539f70e", "text": "The problem of matching measured latitude/longitude points to roads is becoming increasingly important. 
This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude/longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work.", "title": "" }, { "docid": "003d004f57d613ff78bf39a35e788bf9", "text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications. In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.", "title": "" }, { "docid": "9a365e753817048ff149a5cd26885925", "text": "This paper presents an overview of the state of the art in reactive power compensation technologies. The principles of operation, design characteristics and application examples of Var compensators implemented with thyristors and self-commutated converters are presented. Static Var generators are used to improve voltage regulation, stability, and power factor in ac transmission and distribution systems. Examples obtained from relevant applications describing the use of reactive power compensators implemented with new static Var technologies are also described.", "title": "" }, { "docid": "4f743522e81cf89caf1b8c2134441409", "text": "In this paper, the attitude stabilization problem of an Octorotor with coaxial motors is studied. To this end, the new method of intelligent adaptive control is presented. The designed controller which includes fuzzy and PID controllers, is completed by resistant adaptive function of approximate external disturbance and changing in the dynamic model. In fact, the regulation factor of PID controller is done by the fuzzy logic system. At first, the Fuzzy-PID and PID controllers are simulated in MATLAB/Simulink. Then, the Fuzzy-PID controller is implemented on the Octorotor with coaxial motors as online auto-tuning. Also, LabVIEW software has been used for tests and the performance analysis of the controllers. All of this experimental operation is done in indoor environment in the presence of wind as disturbance in the hovering operation. 
All of these operations run in real time, and wireless telemetry is carried out over a network connection between the robot and the ground station in the LabVIEW software. Finally, the controller efficiency and results are studied.", "title": "" }, { "docid": "a209be3245a8227bf82644ef98a2da16", "text": "Presentation, specifically its use of elements from storytelling, is the next logical step in visualization research and should be a focus of at least equal importance with exploration and analysis.", "title": "" }, { "docid": "9c98dfb1e7df220edc4bc7cd57956b4b", "text": "In this paper we present MATISSE 2.0, a microscopic multi-agent based simulation system for the specification and execution of simulation scenarios for Agent-based intelligent Transportation Systems (ATS). In MATISSE, each smart traffic element (e.g., vehicle, intersection control device) is modeled as a virtual agent which continuously senses its surroundings and communicates and collaborates with other agents. MATISSE incorporates traffic control strategies such as contraflow operations and dynamic traffic sign changes. Experimental results show the ability of MATISSE 2.0 to simulate traffic scenarios with thousands of agents on a single PC.", "title": "" }, { "docid": "3266af647a3a85d256d42abc6f3eca55", "text": "This paper introduces a learning scheme to construct a Hilbert space (i.e., a vector space along with its inner product) to address both unsupervised and semi-supervised domain adaptation problems. This is achieved by learning projections from each domain to a latent space along the Mahalanobis metric of the latent space to simultaneously minimize a notion of domain variance while maximizing a measure of discriminatory power. In particular, we make use of Riemannian optimization techniques to match statistical properties (e.g., first and second order statistics) between samples projected into the latent space from different domains. Upon availability of class labels, we further deem samples sharing the same label to form more compact clusters while pulling away samples coming from different classes. We extensively evaluate and contrast our proposal against state-of-the-art methods for the task of visual domain adaptation using both handcrafted and deep-net features. Our experiments show that even with a simple nearest neighbor classifier, the proposed method can outperform several state-of-the-art methods benefitting from more involved classification schemes.", "title": "" }, { "docid": "16932e01fdea801f28ec6c4194f70352", "text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. 
Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.", "title": "" }, { "docid": "1994e427b1d00f1f64ed91559ffa5daa", "text": "We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com.", "title": "" }, { "docid": "595052e154117ce66202a1a82e0a4072", "text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.", "title": "" }, { "docid": "678d3dccdd77916d0c653d88785e1300", "text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). 
The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.", "title": "" } ]
scidocsrr
8c6fc852e3da449c0d2023434f4e7e03
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "5dca1e55bd6475ff352db61580dec807", "text": "Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as “WAGE” to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.", "title": "" }, { "docid": "6fc6167d1ef6b96d239fea03b9653865", "text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.", "title": "" } ]
[ { "docid": "d35623e1c73a30c2879a1750df295246", "text": "Online human textual interaction often carries important emotional meanings inaccessible to computers. We propose an approach to textual emotion recognition in the context of computer-mediated communication. The proposed recognition approach works at the sentence level and uses the standard Ekman emotion classification. It is grounded in a refined keyword-spotting method that employs: a WordNet-based word lexicon, a lexicon of emoticons, common abbreviations and colloquialisms, and a set of heuristic rules. The approach is implemented through the Synesketch software system. Synesketch is published as a free, open source software library. Several Synesketch-based applications presented in the paper, such as the the emotional visual chat, stress the practical value of the approach. Finally, the evaluation of the proposed emotion recognition algorithm shows high accuracy and promising results for future research and applications.", "title": "" }, { "docid": "129e01910a1798c69d01d0642a4f6bf4", "text": "We show that Tobin's q, as proxied by the ratio of the firm's market value to its book value, increases with the firm's systematic equity risk and falls with the firm's unsystematic equity risk. Further, an increase in the firm's total equity risk is associated with a fall in q. The negative relation between the change in total risk and the change in q is robust through time for the whole sample, but it does not hold for the largest firms.", "title": "" }, { "docid": "5cc3d79d7bd762e8cfd9df658acae3fc", "text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.", "title": "" }, { "docid": "28ba4e921cb942c8022c315561abf526", "text": "Metamaterials have attracted more and more research attentions recently. Metamaterials for electromagnetic applications consist of sub-wavelength structures designed to exhibit particular responses to an incident EM (electromagnetic) wave. Traditional EM (electromagnetic) metamaterial is constructed from thick and rigid structures, with the form-factor suitable for applications only in higher frequencies (above GHz) in microwave band. In this paper, we developed a thin and flexible metamaterial structure with small-scale unit cell that gives EM metamaterials far greater flexibility in numerous applications. By incorporating ferrite materials, the thickness and size of the unit cell of metamaterials have been effectively scaled down. The design, mechanism and development of flexible ferrite loaded metamaterials for microwave applications is described, with simulation as well as measurements. Experiments show that the ferrite film with permeability of 10 could reduce the resonant frequency. The thickness of the final metamaterials is only 0.3mm. 
This type of ferrite loaded metamaterials offers opportunities for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces.", "title": "" }, { "docid": "68c1cf9be287d2ccbe8c9c2ed675b39e", "text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. 
Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of the peripheral vasculature.", "title": "" }, { "docid": "e8366d4e7f59fc32da001d3513cf8eee", "text": "Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which, when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test-sets show our approach to be competitive with the state of the art.", "title": "" }, { "docid": "724388aac829af9671a90793b1b31197", "text": "We present a statistical phrase-based translation model that uses hierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntax-based translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrase-based model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.", "title": "" }, { "docid": "d3501679c9652df1faaaff4c391be567", "text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial number of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. 
We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.", "title": "" }, { "docid": "23ff4a40f9a62c8a26f3cc3f8025113d", "text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]-[3]. Today, wireless technologies are essentially a must-have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than their higher-frequency, or RF, counterparts, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.", "title": "" }, { "docid": "00bcce935ca2e4d443941b7e90d644c9", "text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. 
We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.", "title": "" }, { "docid": "0c57dd3ce1f122d3eb11a98649880475", "text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.", "title": "" }, { "docid": "e0f89b22f215c140f69a22e6b573df41", "text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The effect of the latch offset voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the digital-to-analog converter (DAC) in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved figure of merit (FoM) is 11.4 fJ/conversion-step.", "title": "" }, { "docid": "759831bb109706b6963b21984a59d2d1", "text": "Workflow management systems will change the architecture of future information systems dramatically. The explicit representation of business procedures is one of the main issues when introducing a workflow management system. In this paper we focus on a class of Petri nets suitable for the representation, validation and verification of these procedures. We will show that the correctness of a procedure represented by such a Petri net can be verified by using standard Petri-net-based techniques. 
Based on this result we provide a comprehensive set of transformation rules which can be used to construct and modify correct procedures.", "title": "" }, { "docid": "a7287ea0f78500670fb32fc874968c54", "text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.", "title": "" }, { "docid": "477be87ed75b8245de5e084a366b7a6d", "text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? 
Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "ce7d164774826897e9d7386ec9159bba", "text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.", "title": "" } ]
scidocsrr
39f503ca38fba95c34d6da204039c84e
5G Millimeter-Wave Antenna Array: Design and Challenges
[ { "docid": "40d28bd6b2caedec17a0990b8020c918", "text": "The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed.", "title": "" }, { "docid": "ed676ff14af6baf9bde3bdb314628222", "text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.", "title": "" } ]
[ { "docid": "9b2e025c6bb8461ddb076301003df0e4", "text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.", "title": "" }, { "docid": "204ae059e0856f8531b67b707ee3f068", "text": "In highly regulated industries such as aerospace, the introduction of new quality standard can provide the framework for developing and formulating innovative novel business models which become the foundation to build a competitive, customer-centric enterprise. A number of enterprise modeling methods have been developed in recent years mainly to offer support for enterprise design and help specify systems requirements and solutions. However, those methods are inefficient in providing sufficient support for quality systems links and assessment. The implementation parts of the processes linked to the standards remain unclear and ambiguous for the practitioners as a result of new standards introduction. This paper proposed to integrate new revision of AS/EN9100 aerospace quality elements through systematic integration approach which can help the enterprises in business re-engineering process. The assessment capability model is also presented to identify impacts on the existing system as a result of introducing new standards.", "title": "" }, { "docid": "2d17b30942ce0984dcbcf5ca5ba38bd2", "text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. 
They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.", "title": "" }, { "docid": "14fe4e2fb865539ad6f767b9fc9c1ff5", "text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.", "title": "" }, { "docid": "a1e50fdb1bde8730a3201d771135eb68", "text": "This paper briefly introduces an approach to the problem of building semantic interpretations of nominal compounds, i.e. sequences of two or more nouns related through modification. Examples of the kinds of nominal compounds dealt with are: \"engine repairs\", \"aircraft flight arrival\", \"aluminum water pump\", and \"noun noun modification\".", "title": "" }, { "docid": "fd7a6e8eed4391234812018237434283", "text": "Due to the increase of the number of wind turbines connected directly to the electric utility grid, new regulator codes have been issued that require low-voltage ride-through capability for wind turbines so that they can remain online and support the electric grid during voltage sags. Conventional ride-through techniques for the doubly fed induction generator (DFIG) architecture result in compromised control of the turbine shaft and grid current during fault events. In this paper, a series passive-impedance network at the stator side of a DFIG wind turbine is presented. It is easy to control, capable of off-line operation for high efficiency, and low cost for manufacturing and maintenance. 
The balanced and unbalanced fault responses of a DFIG wind turbine with a series grid side passive-impedance network are examined using computer simulations and hardware experiments.", "title": "" }, { "docid": "2059db0707ffc28fd62b7387ba6d09ae", "text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.", "title": "" }, { "docid": "103ec725b4c07247f1a8884610ea0e42", "text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.", "title": "" }, { "docid": "8284163c893d79213b6573249a0f0a32", "text": "Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.", "title": "" }, { "docid": "e353843f2f5102c263d18382168e2c69", "text": "The number of adult learners who participate in online learning has rapidly grown in the last two decades due to online learning's many advantages. In spite of the growth, the high dropout rate in online learning has been of concern to many higher education institutions and organizations. 
The purpose of this study was to determine whether persistent learners and dropouts are different in individual characteristics (i.e., age, gender, and educational level), external factors (i.e., family and organizational supports), and internal factors (i.e., satisfaction and relevance as sub-dimensions of motivation). Quantitative data were collected from 147 learners who had dropped out of or finished one of the online courses offered from a large Midwestern university. Dropouts and persistent learners showed statistical differences in perceptions of family and organizational support, and satisfaction and relevance. It was also shown that the theoretical framework, which includes family support, organizational support, satisfaction, and relevance in addition to individual characteristics, is able to predict learners' decision to drop out or persist. Organizational support and relevance were shown to be particularly predictive. The results imply that lower dropout rates can be achieved if online program developers or instructors find ways to enhance the relevance of the course. It also implies that adult learners need to be supported by their organizations in order for them to finish online courses that they register for.", "title": "" }, { "docid": "343a2035ca2136bc38451c0e92aeb7fc", "text": "Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. 
This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.", "title": "" }, { "docid": "b0741999659724f8fa5dc1117ec86f0d", "text": "With the rapidly growing scales of statistical problems, subset-based communication-free parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. A simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.", "title": "" }, { "docid": "be7f7d9c6a28b7d15ec381570752de95", "text": "Neural networks are very popular in the research community due to their generalization abilities. Additionally, they have been successfully implemented in biometrics, feature selection, object tracking, document image preprocessing and classification. This paper specifically clusters, summarizes, interprets and evaluates neural networks in document image preprocessing. The importance of learning algorithms in neural network training and testing for preprocessing is also highlighted. Finally, a critical analysis of the reviewed approaches and future research guidelines in the field are suggested.", "title": "" }, { "docid": "0b11d414b25a0bc7262dafc072264ff2", "text": "Selecting appropriate words to compose a sentence is one common problem faced by non-native Chinese learners. In this paper, we propose (bidirectional) LSTM sequence labeling models and explore various features to detect word usage errors in Chinese sentences. By combining CWINDOW word embedding features and POS information, the best bidirectional LSTM model achieves accuracy 0.5138 and MRR 0.6789 on the HSK dataset. For 80.79% of the test data, the model ranks the groundtruth within the top two at position level.", "title": "" }, { "docid": "dca2900c2b002e3119435bcf983c5aac", "text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. 
We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.", "title": "" }, { "docid": "07c63e6e7ec9e64e9f19ec099e6c3c00", "text": "Despite their remarkable performance in various machine intelligence tasks, the computational intensity of Convolutional Neural Networks (CNNs) has hindered their widespread utilization in resource-constrained embedded and IoT systems. To address this problem, we present a framework for synthesis of efficient CNN inference software targeting mobile SoC platforms. We argue that thread granularity can substantially impact the performance and energy dissipation of the synthesized inference software, and demonstrate that launching the maximum number of logical threads, often promoted as a guiding principle by GPGPU practitioners, does not result in an efficient implementation for mobile SoCs. We hypothesize that the runtime of a CNN layer on a particular SoC platform can be accurately estimated as a linear function of its computational complexity, which may seem counter-intuitive, as modern mobile SoCs utilize a plethora of heterogeneous architectural features and dynamic resource management policies. Consequently, we develop a principled approach and a data-driven analytical model to optimize granularity of threads during CNN software synthesis. Experimental results with several modern CNNs mapped to a commodity Android smartphone with a Snapdragon SoC show up to 2.37X speedup in application runtime, and up to 1.9X improvement in its energy dissipation compared to existing approaches.", "title": "" }, { "docid": "da981709f7a0ff7f116fe632b7a989db", "text": "A method is presented for locating protein antigenic determinants by analyzing amino acid sequences in order to find the point of greatest local hydrophilicity. This is accomplished by assigning each amino acid a numerical value (hydrophilicity value) and then repetitively averaging these values along the peptide chain. The point of highest local average hydrophilicity is invariably located in, or immediately adjacent to, an antigenic determinant. It was found that the prediction success rate depended on averaging group length, with hexapeptide averages yielding optimal results. The method was developed using 12 proteins for which extensive immunochemical analysis has been carried out and subsequently was used to predict antigenic determinants for the following proteins: hepatitis B surface antigen, influenza hemagglutinins, fowl plague virus hemagglutinin, human histocompatibility antigen HLA-B7, human interferons, Escherichia coli and cholera enterotoxins, ragweed allergens Ra3 and Ra5, and streptococcal M protein. The hepatitis B surface antigen sequence was synthesized by chemical means and was shown to have antigenic activity by radioimmunoassay.", "title": "" }, { "docid": "6f2720e4f63b5d3902810ee5b2c17f2b", "text": "Latent structured prediction theory proposes powerful methods such as Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). 
In contrast, only limited work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carried out a practical study comparing, for the first time, online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and the much lower accuracy produced by Kruskal’s spanning tree algorithm. In this respect, we also propose a new, effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM, while being much more efficient.", "title": "" }, { "docid": "695264db0ca1251ab0f63b04d41c68cd", "text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.", "title": "" }
scidocsrr
64f762aaf0e35b18b6c5c9804f5fcf45
HAGP: A Hub-Centric Asynchronous Graph Processing Framework for Scale-Free Graph
[ { "docid": "216d4c4dc479588fb91a27e35b4cb403", "text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.", "title": "" }, { "docid": "e9b89400c6bed90ac8c9465e047538e7", "text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.", "title": "" } ]
[ { "docid": "3f5f7b099dff64deca2a265c89ff481e", "text": "We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6/spl deg/ are obtained for a variety of walking motions.", "title": "" }, { "docid": "176dc8d5d0ed24cc9822924ae2b8ca9b", "text": "Detection of image forgery is an important part of digital forensics and has attracted a lot of attention in the past few years. Previous research has examined residual pattern noise, wavelet transform and statistics, image pixel value histogram and other features of images to authenticate the primordial nature. With the development of neural network technologies, some effort has recently applied convolutional neural networks to detecting image forgery to achieve high-level image representation. This paper proposes to build a convolutional neural network different from the related work in which we try to understand extracted features from each convolutional layer and detect different types of image tampering through automatic feature learning. The proposed network involves five convolutional layers, two full-connected layers and a Softmax classifier. Our experiment has utilized CASIA v1.0, a public image set that contains authentic images and splicing images, and its further reformed versions containing retouching images and re-compressing images as the training data. Experimental results can clearly demonstrate the effectiveness and adaptability of the proposed network.", "title": "" }, { "docid": "5c0a3aa0a50487611a64905655164b89", "text": "Cloud radio access network (C-RAN) refers to the visualization of base station functionalities by means of cloud computing. This results in a novel cellular architecture in which low-cost wireless access points, known as radio units or remote radio heads, are centrally managed by a reconfigurable centralized \"cloud\", or central, unit. C-RAN allows operators to reduce the capital and operating expenses needed to deploy and maintain dense heterogeneous networks. 
This critical advantage, along with spectral efficiency, statistical multiplexing and load balancing gains, make C-RAN well positioned to be one of the key technologies in the development of 5G systems. In this paper, a succinct overview is presented regarding the state of the art on the research on C-RAN with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts.", "title": "" }, { "docid": "95bbe5d13f3ca5f97d01f2692a9dc77a", "text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.", "title": "" }, { "docid": "af973255ab5f85a5dfb8dd73c19891a0", "text": "I use the example of the 2000 US Presidential election to show that political controversies with technical underpinnings are not resolved by technical means. Then, drawing from examples such as climate change, genetically modified foods, and nuclear waste disposal, I explore the idea that scientific inquiry is inherently and unavoidably subject to becoming politicized in environmental controversies. I discuss three reasons for this. First, science supplies contesting parties with their own bodies of relevant, legitimated facts about nature, chosen in part because they help make sense of, and are made sensible by, particular interests and normative frameworks. Second, competing disciplinary approaches to understanding the scientific bases of an environmental controversy may be causally tied to competing value-based political or ethical positions. The necessity of looking at nature through a variety of disciplinary lenses brings with it a variety of normative lenses, as well. 
Third, it follows from the foregoing that scientific uncertainty, which so often occupies a central place in environmental controversies, can be understood not as a lack of scientific understanding but as the lack of coherence among competing scientific understandings, amplified by the various political, cultural, and institutional contexts within which science is carried out. In light of these observations, I briefly explore the problem of why some types of political controversies become “scientized” and others do not, and conclude that the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b6b4de917de527351939c3493581275", "text": "Several studies have used the Edinburgh Postnatal Depression Scale (EPDS), developed to screen new mothers, also for new fathers. This study aimed to further contribute to this knowledge by comparing assessment of possible depression in fathers and associated demographic factors by the EPDS and the Gotland Male Depression Scale (GMDS), developed for \"male\" depression screening. The study compared EPDS score ≥10 and ≥12, corresponding to minor and major depression, respectively, in relation to GMDS score ≥13. At 3-6 months after child birth, a questionnaire was sent to 8,011 fathers of whom 3,656 (46%) responded. The detection of possibly depressed fathers by EPDS was 8.1% at score ≥12, comparable to the 8.6% detected by the GMDS. At score ≥10, the proportion detected by EPDS increased to 13.3%. Associations with possible risk factors were analyzed for fathers detected by one or both scales. A low income was associated with depression in all groups. Fathers detected by EPDS alone were at higher risk if they had three or more children, or lower education. Fathers detected by EPDS alone at score ≥10, or by both scales at EPDS score ≥12, more often were born in a foreign country. Seemingly, the EPDS and the GMDS are associated with different demographic risk factors. The EPDS score appears critical since 5% of possibly depressed fathers are excluded at EPDS cutoff 12. These results suggest that neither scale alone is sufficient for depression screening in new fathers, and that the decision of EPDS cutoff is crucial.", "title": "" }, { "docid": "4d2c5785e60fa80febb176165622fca7", "text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. 
A direct perspective of this work is the computation of 3D+time atlases.", "title": "" }, { "docid": "5946378b291a1a0e1fb6df5cd57d716f", "text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds. The teammates ✩This article contains material from 4 prior conference papers [11–14]. Email addresses: sam@cogitai.com (Samuel Barrett), rosenfa@jct.ac.il (Avi Rosenfeld), sarit@cs.biu.ac.il (Sarit Kraus), pstone@cs.utexas.edu (Peter Stone) 1This work was performed while Samuel Barrett was a graduate student at the University of Texas at Austin. 2Corresponding author. Preprint submitted to Elsevier October 30, 2016 To appear in http://dx.doi.org/10.1016/j.artint.2016.10.005 Artificial Intelligence (AIJ)", "title": "" }, { "docid": "a27b626618e225b03bec1eea8327be4d", "text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution. 
The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.", "title": "" }, { "docid": "8654b1d03f46c1bb94b237977c92ff02", "text": "Many studies suggest using coverage concepts, such as branch coverage, as the starting point of testing, while others as the most prominent test quality indicator. Yet the relationship between coverage and fault-revelation remains unknown, yielding uncertainty and controversy. Most previous studies rely on the Clean Program Assumption, that a test suite will obtain similar coverage for both faulty and fixed ('clean') program versions. This assumption may appear intuitive, especially for bugs that denote small semantic deviations. However, we present evidence that the Clean Program Assumption does not always hold, thereby raising a critical threat to the validity of previous results. We then conducted a study using a robust experimental methodology that avoids this threat to validity, from which our primary finding is that strong mutation testing has the highest fault revelation of four widely-used criteria. Our findings also revealed that fault revelation starts to increase significantly only once relatively high levels of coverage are attained.", "title": "" }, { "docid": "897fb39d295defc4b6e495236a2c74b1", "text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.", "title": "" }, { "docid": "1e9e3fce7ae4e980658997c2984f05cb", "text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. 
Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.", "title": "" }, { "docid": "7b341e406c28255d3cb4df5c4665062d", "text": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed MRU encoders are characterized by multi-ranged gating, executing a series of parameterized contractand-expand layers for learning gating vectors that benefit from long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show that our proposed encoder demonstrates promising results both as a standalone encoder and as well as a complementary building block. We conduct extensive experiments on three challenging MC datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic Fusion Networks) by 1.5% − 6% without using any recurrent or convolution layers. Similarly, we achieve competitive performance relative to AMANDA [17] on the SearchQA benchmark and BiDAF [23] on the NarrativeQA benchmark without using any LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM architectures further improves performance, achieving state-of-the-art results.", "title": "" }, { "docid": "d78609519636e288dae4b1fce36cb7a6", "text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. 
Finally, an overview of future research directions and applications is given.", "title": "" }, { "docid": "c404e6ecb21196fec9dfeadfcb5d4e4b", "text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (System-Theoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.", "title": "" }, { "docid": "d2c0e71db2957621eca42bdc221ffb8f", "text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is highly challenging due to the extreme complexity within the sequences. Most existing models fail to capture the intrinsic information, factors and tendencies of the sequences. To improve on previous approaches, in this paper we propose a Hidden Markov Model (HMM) based approach to analyze the financial time sequence. The fluctuation of the financial time sequence is predicted by introducing a dual-state HMM. The dual-state HMM models the sequence and produces the features that are then delivered to SVMs for prediction. Note that we cast the financial time sequence prediction problem as a classification problem. To evaluate the proposed approach, we use the Shanghai Composite Index as the dataset for empirical experiments. The dataset was collected from 550 consecutive trading days, and is randomly split into a training set and a test set. The extensive experimental results show that, when analyzing the financial time sequence, the mean-square error calculated with the HMM approach is clearly smaller than that of the compared GARCH approach. Therefore, when using the HMM to predict the fluctuation of the financial time sequence, it achieves higher accuracy and exhibits several attractive advantages over the GARCH approach.", "title": "" }, { "docid": "ff83e090897ed7b79537392801078ffb", "text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems.
The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.", "title": "" }, { "docid": "6562b9b46d17bf983bcef7f486ecbc36", "text": "Upper-extremity venous thrombosis often presents as unilateral arm swelling. The differential diagnosis includes lesions compressing the veins and causing a functional venous obstruction, venous stenosis, an infection causing edema, obstruction of previously functioning lymphatics, or the absence of sufficient lymphatic channels to ensure effective drainage. The following recommendations are made with the understanding that venous disease, specifically venous thrombosis, is the primary diagnosis to be excluded or confirmed in a patient presenting with unilateral upper-extremity swelling. Contrast venography remains the best reference-standard diagnostic test for suspected upper-extremity acute venous thrombosis and may be needed whenever other noninvasive strategies fail to adequately image the upper-extremity veins. Duplex, color flow, and compression ultrasound have also established a clear role in evaluation of the more peripheral veins that are accessible to sonography. Gadolinium contrast-enhanced MRI is routinely used to evaluate the status of the central veins. Delayed CT venography can often be used to confirm or exclude more central vein venous thrombi, although substantial contrast loads are required. The ACR Appropriateness Criteria(®) are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.", "title": "" }, { "docid": "eb6643fba28b6b84b4d51a565fc97be0", "text": "The spiral antenna is a well known kind of wideband antenna. The challenges to improve its design are numerous, such as creating a compact wideband matched feeding or controlling the radiation pattern. Here we propose a self matched and compact slot spiral antenna providing a unidirectional pattern.", "title": "" } ]
scidocsrr
797dad33f2f98c2954816565895666ba
BRISK: Binary Robust invariant scalable keypoints
[ { "docid": "e32f77e31a452ae6866652ce69c5faaa", "text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.", "title": "" } ]
[ { "docid": "2a4eb6d12a50034b5318d246064cb86e", "text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.", "title": "" }, { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" }, { "docid": "05ba530d5f07e141d18c3f9b92a6280d", "text": "In this paper, we introduce autoencoder ensembles for unsupervised outlier detection. One problem with neural networks is that they are sensitive to noise and often require large data sets to work robustly, while increasing data size makes them slow. As a result, there are only a few existing works in the literature on the use of neural networks in outlier detection. This paper shows that neural networks can be a very competitive technique to other existing methods. 
The basic idea is to randomly vary the connectivity architecture of the autoencoder to obtain significantly better performance. Furthermore, we combine this technique with an adaptive sampling method to make our approach more efficient and effective. Experimental results comparing the proposed approach with state-of-the-art detectors are presented on several benchmark data sets showing the accuracy of our approach.", "title": "" }, { "docid": "b0eea601ef87dbd1d7f39740ea5134ae", "text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable, replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as one such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Hayes, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette, Department of Psychology, University of Nevada; Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science, University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Bruner, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology: syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined.
Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993: Hayes, Nelson & Jarrett, 1987: Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM -IV; American Psychiatric Association, 1994) has lead to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association 1994). The immediate goal of syndromal classification (Foulds. 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. 
Syndromal categories tend to evolve, changing their names frequently and splitting into ever finer subcategories, but, except for political reasons (e.g., homosexuality as a disorder), they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different disease entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances where it does, mechanisms of change are often unclear or unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior are not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system.
Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992) -Step 1: identify potentially relevant characterist", "title": "" }, { "docid": "3ab85b8f58e60f4e59d6be49648ce290", "text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certification Authority. However this entity is commonly off-line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be sufficiently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover files containing information related to passwords, and unless the passwords are of sufficiently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.", "title": "" }, { "docid": "15a37341901e410e2754ae46d7ba11e7", "text": "Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Usually, these processes must be completed in a certain time window; thus, it is necessary to optimize their execution time. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide algorithms towards the minimization of the execution cost of an ETL workflow.", "title": "" }, { "docid": "ea544ffc7eeee772388541d0d01812a7", "text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy.
In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.", "title": "" }, { "docid": "f63503eb721aa7c1fd6b893c2c955fdf", "text": "In 2008, financial tsunami started to impair the economic development of many countries, including Taiwan. The prediction of financial crisis turns to be much more important and doubtlessly holds public attention when the world economy goes to depression. This study examined the predictive ability of the four most commonly used financial distress prediction models and thus constructed reliable failure prediction models for public industrial firms in Taiwan. Multiple discriminate analysis (MDA), logit, probit, and artificial neural networks (ANNs) methodology were employed to a dataset of matched sample of failed and non-failed Taiwan public industrial firms during 1998–2005. The final models are validated using within sample test and out-of-the-sample test, respectively. The results indicated that the probit, logit, and ANN models which used in this study achieve higher prediction accuracy and possess the ability of generalization. The probit model possesses the best and stable performance. However, if the data does not satisfy the assumptions of the statistical approach, then the ANN approach would demonstrate its advantage and achieve higher prediction accuracy. In addition, the models which used in this study achieve higher prediction accuracy and possess the ability of generalization than those of [Altman, Financial ratios—discriminant analysis and the prediction of corporate bankruptcy using capital market data, Journal of Finance 23 (4) (1968) 589–609, Ohlson, Financial ratios and the probability prediction of bankruptcy, Journal of Accounting Research 18 (1) (1980) 109–131, and Zmijewski, Methodological issues related to the estimation of financial distress prediction models, Journal of Accounting Research 22 (1984) 59–82]. In summary, the models used in this study can be used to assist investors, creditors, managers, auditors, and regulatory agencies in Taiwan to predict the probability of business failure. & 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5d8fc02f96206da7ccb112866951d4c7", "text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. 
We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.", "title": "" }, { "docid": "bf9d706685f76877a56d323423b32a5c", "text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. Exposure differences within cities are associated with the risk of cardiovascular disease.", "title": "" }, { "docid": "7dcc7cdff8a9196c716add8a1faf0203", "text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. 
A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.", "title": "" }, { "docid": "a7959808cb41963e8d204c3078106842", "text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.", "title": "" }, { "docid": "5a11ab9ece5295d4d1d16401625ab3d4", "text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.", "title": "" }, { "docid": "9573bb5596dcec8668e9ba1b38d0b310", "text": "Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.", "title": "" }, { "docid": "26c003f70bbaade54b84dcb48d2a08c9", "text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. 
Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.", "title": "" }, { "docid": "c1c9730b191f2ac9186ac704fd5b929f", "text": "This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.", "title": "" }, { "docid": "9072c5ad2fbba55bdd50b5969862f7c3", "text": "Parametricism has come to scene as an important style in both architectural design and construction where conventional Computer-Aided Design (CAD) tool has become substandard. Building Information Modeling (BIM) is a recent object-based parametric modeling tool for exploring the relationship between the geometric and non-geometric components of the model. The aim of this research is to explore the capabilities of BIM in achieving variety and flexibility in design extending from architectural to urban scale. This study proposes a method by using User Interface (UI) and Application Programming Interface (API) tools of BIM to generate a complex roof structure as a parametric family. This project demonstrates a dynamic variety in architectural scale. We hypothesized that if a function calculating the roof length is defined using a variety of inputs, it can later be applied to urban scale by utilizing a database of the inputs.", "title": "" }, { "docid": "3b06bc2d72e0ae7fa75873ed70e23fc3", "text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. 
However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.", "title": "" }, { "docid": "e757926fbaec4097530b9a00c1278b1c", "text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.", "title": "" } ]
scidocsrr
68e7ad7ce70918a0d31e9949a4f6095f
Nested Mini-Batch K-Means
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "cda19d99a87ca769bb915167f8a842e8", "text": "Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.", "title": "" } ]
[ { "docid": "348702d85126ed64ca24bdc62c1146d9", "text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.", "title": "" }, { "docid": "c03ae003e3fd6503822480267108e2a6", "text": "A relatively simple model of the phonological loop (A. D. Baddeley, 1986), a component of working memory, has proved capable of accommodating a great deal of experimental evidence from normal adult participants, children, and neuropsychological patients. Until recently, however, the role of this subsystem in everyday cognitive activities was unclear. In this article the authors review studies of word learning by normal adults and children, neuropsychological patients, and special developmental populations, which provide evidence that the phonological loop plays a crucial role in learning the novel phonological forms of new words. The authors propose that the primary purpose for which the phonological loop evolved is to store unfamiliar sound patterns while more permanent memory records are being constructed. Its use in retaining sequences of familiar words is, it is argued, secondary.", "title": "" }, { "docid": "c55afb93606ddb88f0a9274f06eca68b", "text": "Social media use continues to grow and is especially prevalent among young adults. It is surprising then that, in spite of this enhanced interconnectivity, young adults may be lonelier than other age groups, and that the current generation may be the loneliest ever. We propose that only image-based platforms (e.g., Instagram, Snapchat) have the potential to ameliorate loneliness due to the enhanced intimacy they offer. In contrast, text-based platforms (e.g., Twitter, Yik Yak) offer little intimacy and should have no effect on loneliness. This study (N 1⁄4 253) uses a mixed-design survey to test this possibility. Quantitative results suggest that loneliness may decrease, while happiness and satisfaction with life may increase, as a function of image-based social media use. In contrast, text-based media use appears ineffectual. Qualitative results suggest that the observed effects may be due to the enhanced intimacy offered by imagebased (versus text-based) social media use. © 2016 Published by Elsevier Ltd. “The more advanced the technology, on the whole, the more possible it is for a considerable number of human beings to imagine being somebody else.” -sociologist David Riesman.", "title": "" }, { "docid": "509d77cef3f9ded37f75b0b1a1314e81", "text": "Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques, combined with robust image representations. 
Only recently, there has been a growing interest in revisiting the promise of computer vision from the early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design an object class detector that is particularly tailored towards 3D object class detection. Our 3D object class detection method consists of several stages gradually enriching the object detection output with object viewpoint, keypoints and 3D shape estimates. Following careful design, in each stage it constantly improves the performance and achieves state-of-the-art performance in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ [50] dataset.", "title": "" }, { "docid": "42b287804a9ce6497c3e491b3baa9a6f", "text": "Smothering is defined as an obstruction of the air passages above the level of the epiglottis, including the nose, mouth, and pharynx. This is in contrast to choking, which is considered to be due to an obstruction of the air passages below the epiglottis. The manner of death in smothering can be homicidal, suicidal, or an accident. Accidental smothering is considered to be a rare event among middle-aged adults, yet many cases still occur. Presented here is the case of a 39-year-old woman with a history of bipolar disease who was found dead on her living room floor by her neighbors. Her hands were covered in scratches and her pet cat was found disemboweled in the kitchen with its tail hacked off. On autopsy her stomach was found to be full of cat intestines, adipose tissue, and strips of fur-covered skin. An intact left kidney and adipose tissue were found lodged in her throat just above her epiglottis. After a complete investigation, the cause of death was determined to be asphyxia by smothering due to animal tissue.", "title": "" }, { "docid": "346e160403ff9eb55c665f6cb8cca481", "text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.", "title": "" }, { "docid": "deccc92276cca4d064b0161fd8ee7dd9", "text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. 
HTML structure or DOM tree structure to scrape data from web pages. In this paper we present a survey of HTML-aware web scraping techniques. Keywords— DOM Tree, HTML structure, semi-structured web pages, web scraping and Web data extraction.", "title": "" }, { "docid": "ad00866e5bae76020e02c6cc76360ec8", "text": "The CASAS architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training.", "title": "" }, { "docid": "e76afdc4a867789e6bcc92876a6b52af", "text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.", "title": "" }, { "docid": "ced4a8b19405839cc948d877e3a42c95", "text": "18-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET)/computed tomography (CT) is currently the most valuable imaging technique in Hodgkin lymphoma. Since its first use in lymphomas in the 1990s, it has become the gold standard in the staging and end-of-treatment remission assessment in patients with Hodgkin lymphoma. The possibility of using early (interim) PET during first-line therapy to evaluate chemosensitivity and thus personalize treatment at this stage holds great promise, and much attention is now being directed toward this goal. With high probability, it is believed that in the near future, the result of interim PET-CT would serve as a compass to optimize treatment. Also the role of PET in pre-transplant assessment is currently evolving. Much controversy surrounds the possibility of detecting relapse after completed treatment with the use of PET in surveillance in the absence of symptoms suggestive of recurrence and the results of published studies are rather discouraging because of low positive predictive value. This review presents current knowledge about the role of 18-FDG-PET/CT imaging at each point of management of patients with Hodgkin lymphoma.", "title": "" }, { "docid": "8759277ebf191306b3247877e2267173", "text": "As organizations scale up, their collective knowledge increases, and the potential for serendipitous collaboration between members grows dramatically. However, finding people with the right expertise or interests becomes much more difficult. Semi-structured social media, such as blogs, forums, and bookmarking, present a viable platform for collaboration-if enough people participate, and if shared content is easily findable. Within the trusted confines of an organization, users can trade anonymity for a rich identity that carries information about their role, location, and position in its hierarchy.\n This paper describes WaterCooler, a tool that aggregates shared internal social media and cross-references it with an organization's directory. 
We deployed WaterCooler in a large global enterprise and present the results of a preliminary user study. Despite the lack of complete social networking affordances, we find that WaterCooler changed users' perceptions of their workplace, made them feel more connected to each other and the company, and redistributed users' attention outside their own business groups.", "title": "" }, { "docid": "8af1865e0adfedb11d9ade95bb39f797", "text": "In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models for both data collected from humans describing their perceptions of musical mood and quantitative features derived from the audio signal. In previous work, we have presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multiple players within the two-dimensional Arousal-Valence representation of emotion. Using this data, we present a system linking models of acoustic features and human data to provide estimates of the emotional content of music according to the arousal-valence space. Furthermore, in keeping with the dynamic nature of musical mood we demonstrate the potential of this approach to track the emotional changes in a song over time. We investigate the utility of a range of acoustic features based on psychoacoustic and music-theoretic representations of the audio for this application. Finally, a simplified version of our system is re-incorporated into MoodSwings as a simulated partner for single-players, providing a potential platform for furthering perceptual studies and modeling of musical mood.", "title": "" }, { "docid": "8a128a099087c3dee5bbca7b2a8d8dc4", "text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.", "title": "" }, { "docid": "7a3441773c79b9fde64ebcf8103616a1", "text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. 
We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).", "title": "" }, { "docid": "f249a6089a789e52eeadc8ae16213bc1", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "7d4d0e4d99b5dfe675f5f4eff5e5679f", "text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.", "title": "" }, { "docid": "ab2096798261a8976846c5f72eeb18ee", "text": "ion Description and Purpose Variable names Provide human readable names to data addresses Function names Provide human readable names to function addresses Control structures Eliminate ‘‘spaghetti’’ code (The ‘‘goto’’ statement is no longer necessary.) Argument passing Default argument values, keyword specification of arguments, variable length argument lists, etc. Data structures Allow conceptual organization of data Data typing Binds the type of the data to the type of the variable Static Insures program correctness, sacrificing generality. Dynamic Greater generality, sacrificing guaranteed correctness. 
Inheritance Allows creation of families of related types and easy re-use of common functionality Message dispatch Providing one name to multiple implementations of the same concept Single dispatch Dispatching to a function based on the run-time type of one argument Multiple dispatch Dispatching to a function based on the run-time type of multiple arguments. Predicate dispatch Dispatching to a function based on run-time state of arguments Garbage collection Automated memory management Closures Allow creation, combination, and use of functions as first-class values Lexical binding Provides access to values in the defining context Dynamic binding Provides access to values in the calling context (.valueEnvir in SC) Co-routines Synchronous cooperating processes Threads Asynchronous processes Lazy evaluation Allows the order of operations not to be specified. Infinitely long processes and infinitely large data structures can be specified and used as needed. Applying Language Abstractions to Computer Music The SuperCollider language provides many of the abstractions listed above. SuperCollider is a dynamically typed, single-inheritance, single-argument dispatch, garbage-collected, object-oriented language similar to Smalltalk (www.smalltalk.org). In SuperCollider, everything is an object, including basic types like letters and numbers. Objects in SuperCollider are organized into classes. The UGen class provides the abstraction of a unit generator, and the Synth class represents a group of UGens operating as a group to generate output. An instrument is constructed functionally. That is, when one writes a sound-processing function, one is actually writing a function that creates and connects unit generators. This is different from a procedural or static object specification of a network of unit generators. Instrument functions in SuperCollider can generate the network of unit generators using the full algorithmic capability of the language. For example, the following code can easily generate multiple versions of a patch by changing the values of the variables that specify the dimensions (number of exciters, number of comb delays, number of allpass delays). In a procedural language like Csound or a ‘‘wire-up’’ environment like Max, a different patch would have to be created for different values for the dimensions of the patch.", "title": "" }, { "docid": "cd81ad1c571f9e9a80e2d09582b00f9a", "text": "OBJECTIVE\nThe biologic basis for gender identity is unknown. Research has shown that the ratio of the length of the second and fourth digits (2D:4D) in mammals is influenced by biologic sex in utero, but data on 2D:4D ratios in transgender individuals are scarce and contradictory. We investigated a possible association between 2D:4D ratio and gender identity in our transgender clinic population in Albany, New York.\n\n\nMETHODS\nWe prospectively recruited 118 transgender subjects undergoing hormonal therapy (50 female to male [FTM] and 68 male to female [MTF]) for finger length measurement. The control group consisted of 37 cisgender volunteers (18 females, 19 males). The length of the second and fourth digits were measured using digital calipers. The 2D:4D ratios were calculated and analyzed with unpaired t tests.\n\n\nRESULTS\nFTM subjects had a smaller dominant hand 2D:4D ratio (0.983 ± 0.027) compared to cisgender female controls (0.998 ± 0.021, P = .029), but a ratio similar to control males (0.972 ± 0.036, P =.19). 
There was no difference in the 2D:4D ratio of MTF subjects (0.978 ± 0.029) compared to cisgender male controls (0.972 ± 0.036, P = .434).\n\n\nCONCLUSION\nOur findings are consistent with a biologic basis for transgender identity and the possibilities that FTM gender identity is affected by prenatal androgen activity but that MTF transgender identity has a different basis.\n\n\nABBREVIATIONS\n2D:4D = 2nd digit to 4th digit; FTM = female to male; MTF = male to female.", "title": "" }, { "docid": "effdc359389fad7eb320120a6f3548d3", "text": "Wireless communication system is a heavy dense composition of signal processing techniques with semiconductor technologies. With the ever increasing system capacity and data rate, VLSI design and implementation method for wireless communications becomes more challenging, which urges researchers in signal processing to provide new architectures and efficient algorithms to meet low power and high performance requirements. This paper presents a survey of recent research, a development in VLSI architecture and signal processing algorithms with emphasis on wireless communication systems. It is shown that while contemporary signal processing can be directly applied to the communication hardware design including ASIC, SoC, and FPGA, much work remains to realize its full potential. It is concluded that an integrated combination of VLSI and signal processing technologies will provide more complete solutions.", "title": "" }, { "docid": "98ae5e9dda1be6e3c4eff68fc5ebbb4d", "text": "Recycling today constitutes the most environmentally friendly method of managing wood waste. A large proportion of the wood waste generated consists of used furniture and other constructed wooden items, which are composed mainly of particleboard, a material which can potentially be reused. In the current research, four different hydrothermal treatments were applied in order to recover wood particles from laboratory particleboards and use them in the production of new (recycled) ones. Quality was evaluated by determining the main properties of the original (control) and the recycled boards. Furthermore, the impact of a second recycling process on the properties of recycled particleboards was studied. With the exception of the modulus of elasticity in static bending, all of the mechanical properties of the recycled boards tested decreased in comparison with the control boards. Furthermore, the recycling process had an adverse effect on their hygroscopic properties and a beneficial effect on the formaldehyde content of the recycled boards. The results indicated that when the 1st and 2nd particleboard recycling processes were compared, it was the 2nd recycling process that caused the strongest deterioration in the quality of the recycled boards. Further research is needed in order to explain the causes of the recycled board quality falloff and also to determine the factors in the recycling process that influence the quality degradation of the recycled boards.", "title": "" } ]
scidocsrr
0d503425a6cfad63bab8ef9673adc50f
Differential privacy and robust statistics
[ { "docid": "7c449b9714d937dc6a3367a851130c4a", "text": "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacv, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.", "title": "" } ]
[ { "docid": "52b55dab6e8a364bf0bcf05787e5b1ef", "text": "In adaptive learning systems for distance learning attention is focused on adjusting the learning material to the needs of the individual. Adaptive tests adjust to the current level of knowledge of the examinee and is specific for their needs, thus it is much better at evaluating the knowledge of each individual. The basic goal of adaptive computer tests is to ensure the examinee questions that are challenging enough for them but not too difficult, which would lead to frustration and confusion. The aim of this paper is to present a computer adaptive test (CAT) realized in MATLAB.", "title": "" }, { "docid": "933bc7cc6e1d56969f9d3fc0157f7ac9", "text": "This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for not maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented and all presented algorithms are tested in real-time on standard PC systems.", "title": "" }, { "docid": "c2da932aec6f3d8c6fddc9aaa994c9cd", "text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and the \" triple-bottom line \" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review on the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics are summarized. Basic and complementary metrics under six impact categories: material, energy, water, solid wastes, toxic release, and pollutant effects, are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. 
It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While providing a metrics management starting point, the user has the option of creating other metrics defined by the user. The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.", "title": "" }, { "docid": "c638fe67f5d4b6e04a37e216edb849fa", "text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.", "title": "" }, { "docid": "8185da1a497e25f0c50e789847b6bd52", "text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.", "title": "" }, { "docid": "f738f79a9d516389e1ad0c7343d525c4", "text": "The I-V curves for Schottky diodes with two different contact areas and geometries fabricated through 1.2 μm CMOS process are presented. These curves are described applying the analysis and practical layout design. 
It takes into account the resistance, capacitance and reverse breakdown voltage in the semiconductor structure and the dependence of these parameters to improve its operation. The described diodes are used for a charge pump circuit implementation.", "title": "" }, { "docid": "ae9fb1b7ff6821dd29945f768426d7fc", "text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.", "title": "" }, { "docid": "0ce7465e40b3b13e5c316fb420a766d9", "text": "We have been developing \"Smart Suit\" as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.", "title": "" }, { "docid": "4264c3ed6ea24a896377a7efa2b425b0", "text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. 
We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration.", "title": "" }, { "docid": "cac7822c1a40b406c998449e2664815f", "text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of high-volume FR4 process at 60 GHz such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna renders the antenna package to exhibit 10-dB return loss bandwidth from 57-66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is fabricated, and its feasibility for future phase array applications is studied. Far-field radiation measurement using an inhouse radio-frequency (RF) probe station validates the single-antenna package to exhibit more than 4.1-dBi gain and 76% radiation efficiency.", "title": "" }, { "docid": "adf1cfe981d965f95e783e1f4ed5fc5d", "text": "PAST research has shown that real-time Twitter data can be used to predict market movement of securities and other financial instruments [1]. The goal of this paper is to prove whether Twitter data relating to cryptocurrencies can be utilized to develop advantageous crypto coin trading strategies. By way of supervised machine learning techniques, our team will outline several machine learning pipelines with the objective of identifying cryptocurrency market movement. The prominent alternative currency examined in this paper is Bitcoin (BTC). 
Our approach to cleaning data and applying supervised learning algorithms such as logistic regression, Naive Bayes, and support vector machines leads to a final hour-to-hour and day-to-day prediction accuracy exceeding 90%. In order to achieve this result, rigorous error analysis is employed in order to ensure that accurate inputs are utilized at each step of the model. This analysis yields a 25% accuracy increase on average.", "title": "" }, { "docid": "b0741999659724f8fa5dc1117ec86f0d", "text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.", "title": "" }, { "docid": "50d22974ef09d0f02ee05d345e434055", "text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.", "title": "" }, { "docid": "6f709e89edaa619f41335b1a06eb713a", "text": "Graphene patch microstrip antenna has been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-lumped elements equivalent circuit for graphene patch microstrip antenna has been investigated. The values of the lumped elements equivalent circuit are optimized using the particle swarm optimization techniques. The optimization is performed to minimize the mean square error between the input impedance of the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene material chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved new equivalent circuit model has been introduced to best fitting the input impedance using a rational function and PSO. 
The Cauer's realization method is used to synthesize a new lumped-elements equivalent circuits.", "title": "" }, { "docid": "d73d16ff470669b4935e85e2de815cb8", "text": "As organizations aggressively deploy radio frequency identification systems, activists are increasingly concerned about RFID's potential to invade user privacy. This overview highlights potential threats and how they might be addressed using both technology and public policy.", "title": "" }, { "docid": "a9399439831a970fcce8e0101696325f", "text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.", "title": "" }, { "docid": "ffdfd49ad9216806ab6a6bf156bb5a87", "text": "PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics. In this paper we present a new approach to PDDL+ planning based on Constraint Answer Set Programming (CASP), i.e. ASP rules plus numerical constraints. To the best of our knowledge, ours is the first attempt to link PDDL+ planning and logic programming. We provide an encoding of PDDL+ models into CASP problems. The encoding can handle non-linear hybrid domains, and represents a solid basis for applying logic programming to PDDL+ planning. As a case study, we consider the EZCSP CASP solver and obtain promising results on a set of PDDL+ benchmark problems.", "title": "" }, { "docid": "d5a816dd44d4d95b0d281880f1917831", "text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.", "title": "" }, { "docid": "26a9fb64389a5dbbbd8afdc6af0b6f07", "text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. 
These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced. Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementation-centered physical view. 
As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.", "title": "" }, { "docid": "1256f0799ed585092e60b50fb41055be", "text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.", "title": "" } ]
scidocsrr
a67fee9575a077eeb977700728c86da6
Combining monoSLAM with object recognition for scene augmentation using a wearable camera
[ { "docid": "2aefddf5e19601c8338f852811cebdee", "text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.", "title": "" } ]
[ { "docid": "088011257e741b8d08a3b44978134830", "text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.", "title": "" }, { "docid": "8868fe4e0907fc20cc6cbc2b01456707", "text": "Tracking multiple objects is a challenging task when objects move in groups and occlude each other. Existing methods have investigated the problems of group division and group energy-minimization; however, lacking overall objectgroup topology modeling limits their ability in handling complex object and group dynamics. Inspired with the social affinity property of moving objects, we propose a Graphical Social Topology (GST) model, which estimates the group dynamics by jointly modeling the group structure and the states of objects using a topological representation. With such topology representation, moving objects are not only assigned to groups, but also dynamically connected with each other, which enables in-group individuals to be correctly associated and the cohesion of each group to be precisely modeled. Using well-designed topology learning modules and topology training, we infer the birth/death and merging/splitting of dynamic groups. With the GST model, the proposed multi-object tracker can naturally facilitate the occlusion problem by treating the occluded object and other in-group members as a whole while leveraging overall state transition. Experiments on both RGB and RGB-D datasets confirm that the proposed multi-object tracker improves the state-of-the-arts especially in crowded scenes.", "title": "" }, { "docid": "04853b59abf86a0dd19fdaac09c9a6c4", "text": "A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set.", "title": "" }, { "docid": "ef5f170bef5daf0800e473554d67fa86", "text": "Morphological segmentation of words is a subproblem of many natural language tasks, including handling out-of-vocabulary (OOV) words in machine translation, more effective information retrieval, and computer assisted vocabulary learning. Previous work typically relies on extensive statistical and semantic analyses to induce legitimate stems and affixes. 
We introduce a new learning based method and a prototype implementation of a knowledge light system for learning to segment a given word into word parts, including prefixes, suffixes, stems, and even roots. The method is based on the Conditional Random Fields (CRF) model. Evaluation results show that our method with a small set of seed training data and readily available resources can produce fine-grained morphological segmentation results that rival previous work and systems.", "title": "" }, { "docid": "1c11472572758b6f831349ebf6443ad5", "text": "In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.", "title": "" }, { "docid": "9e591fe1c8bf7a6a3bc4f31d70c9a94f", "text": "Uploading data streams to a resource-rich cloud server for inner product evaluation, an essential building block in many popular stream applications (e.g., statistical monitoring), is appealing to many companies and individuals. On the other hand, verifying the result of the remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data collection likely comes from multiple data sources, it is desired for the system to be able to pinpoint the originator of errors by allotting each data source a unique secret key, which requires the inner product verification to be performed under any two parties’ different keys. However, the present solutions either depend on a single key assumption or powerful yet practically-inefficient fully homomorphic cryptosystems. In this paper, we focus on the more challenging multi-key scenario where data streams are uploaded by multiple data sources with distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify the outsourced inner product computation on the dynamic data streams, and then extend it to support the verification of matrix product computation. We prove the security of our scheme in the random oracle model. Moreover, the experimental result also shows the practicability of our design.", "title": "" }, { "docid": "0db28b5ec56259c8f92f6cc04d4c2601", "text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. 
We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d8cf2b75936a7c4d5878c3c17ac89074", "text": "A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex.", "title": "" }, { "docid": "b2379dc57ea6ec09400a3e34e79a8d0d", "text": "We propose that a robot speaks a Hanamogera (a semantic-free speech) when the robot speaks with a person. Hanamogera is semantic-free speech and the sound of the speech is a sound of the words which consists of phonogram characters. The consisted characters can be changed freely because the Hanamogera speech does not have to have any meaning. Each sound of characters of a Hanamogera is thought to have an impression according to the contained consonant/vowel in the characters. The Hanamogera is expected to make a listener feel that the talking with a robot which speaks a Hanamogera is fun because of a sound of the Hanamogera. We conducted an experiment of talking with a NAO and an experiment of evaluating to Hanamogera speeches. The results of the experiment showed that a talking with a Hanamogera speech robot was better fun than a talking with a nodding robot.", "title": "" }, { "docid": "6f0283efa932663c83cc2c63d19fd6cf", "text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.", "title": "" }, { "docid": "21f56bb6edbef3448275a0925bd54b3a", "text": "Dr. Stephanie L. Cincotta (Psychiatry): A 35-year-old woman was seen in the emergency department of this hospital because of a pruritic rash. The patient had a history of hepatitis C virus (HCV) infection, acne, depression, and drug dependency. She had been in her usual health until 2 weeks before this presentation, when insomnia developed, which she attributed to her loss of a prescription for zolpidem. 
During the 10 days before this presentation, she reported seeing white “granular balls,” which she thought were mites or larvae, emerging from and crawling on her skin, sheets, and clothing and in her feces, apartment, and car, as well as having an associated pruritic rash. She was seen by her physician, who referred her to a dermatologist for consideration of other possible causes of the persistent rash, such as porphyria cutanea tarda, which is associated with HCV infection. Three days before this presentation, the patient ran out of clonazepam (after an undefined period during which she reportedly took more than the prescribed dose) and had increasing anxiety and insomnia. The same day, she reported seeing “bugs” on her 15-month-old son that were emerging from his scalp and were present on his skin and in his diaper and sputum. The patient scratched her skin and her child’s skin to remove the offending agents. The day before this presentation, she called emergency medical services and she and her child were transported by ambulance to the emergency department of another hospital. A diagnosis of possible cheyletiellosis was made. She was advised to use selenium sulfide shampoo and to follow up with her physician; the patient returned home with her child. On the morning of admission, while bathing her child, she noted that his scalp was turning red and he was crying. She came with her son to the emergency department of this hospital. The patient reported the presence of bugs on her skin, which she attempted to point out to examiners. She acknowledged a habit of picking at her skin since adolescence, which she said had a calming effect. Fourteen months earlier, shortly after the birth of her son, worsening acne developed that did not respond to treatment with topical antimicrobial agents and tretinoin. Four months later, a facial abscess due", "title": "" }, { "docid": "d4ab2085eec138f99d4d490b0cbf9e3a", "text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.", "title": "" }, { "docid": "c52d31c7ae39d1a7df04140e920a26d2", "text": "In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective judgment oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high quality data annotations from non-experts.
Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques is quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).", "title": "" }, { "docid": "f8bc67d88bdd9409e2f3dfdc89f6d93c", "text": "A millimeter-wave CMOS on-chip stacked Marchand balun is presented in this paper. The balun is fabricated using a top pad metal layer as the single-ended port and is stacked above two metal conductors at the next highest metal layer in order to achieve sufficient coupling to function as the differential ports. Strip metal shields are placed underneath the structure to reduce substrate losses. An amplitude imbalance of 0.5 dB is measured with attenuations below 6.5 dB at the differential output ports at 30 GHz. The corresponding phase imbalance is below 5 degrees. The area occupied is 229μm × 229μm.", "title": "" }, { "docid": "3ae3e7f38be2f2d989dde298a64d9ba4", "text": "A number of compilers exploit the following strategy: translate a term to continuation-passing style (CPS) and optimize the resulting term using a sequence of reductions. Recent work suggests that an alternative strategy is superior: optimize directly in an extended source calculus. We suggest that the appropriate relation between the source and target calculi may be captured by a special case of a Galois connection known as a reflection. Previous work has focused on the weaker notion of an equational correspondence, which is based on equality rather than reduction. We show that Moggi's monad translation and Plotkin's CPS translation can both be regarded as reflections, and thereby strengthen a number of results in the literature.", "title": "" }, { "docid": "b0687c84e408f3db46aa9fba6f9eeeb9", "text": "Sex estimation is considered as one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods are still very imperative in identification process in spite of the advent and accomplishment of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated to derive as well as revise the available population data. These methods however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches; morphological, metric, molecular and radiographic methods in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones and hence, such direct methods of sex estimation are considered to be more reliable than the other methods. Geometric morphometric (GM) method and Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid methods and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. 
Development of newer and better methodologies for sex estimation as well as re-evaluation of the existing ones will continue in the endeavour of forensic researchers for more accurate results.", "title": "" }, { "docid": "138ee58ce9d2bcfa14b44642cf9af08b", "text": "This research is a partial test of Park et al.’s (2008) model to assess the impact of flow and brand equity in 3D virtual worlds. It draws on flow theory as its main theoretical foundation to understand and empirically assess the impact of flow on brand equity and behavioral intention in 3D virtual worlds. The findings suggest that the balance of skills and challenges in 3D virtual worlds influences users’ flow experience, which in turn influences brand equity. Brand equity then increases behavioral intention. The authors also found that the impact of flow on behavioral intention in 3D virtual worlds is indirect because the relationship between them is mediated by brand equity. This research highlights the importance of balancing the challenges posed by 3D virtual world branding sites with the users’ skills to maximize their flow experience and brand equity to increase the behavioral intention associated with the brand.", "title": "" }, { "docid": "1c1775a64703f7276e4843b8afc26117", "text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.", "title": "" }, { "docid": "3c530cf20819fe98a1fb2d1ab44dd705", "text": "This paper presents a novel representation for three-dimensional objects in terms of affine-invariant image patches and their spatial relationships. Multi-view constraints associated with groups of patches are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true three-dimensional affine and Euclidean models from multiple images and their recognition in a single photograph taken from an arbitrary viewpoint. The proposed approach does not require a separate segmentation stage and is applicable to cluttered scenes. Preliminary modeling and recognition results are presented.", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure.
In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.", "title": "" } ]
scidocsrr
03bfcb704a6678551c30cf2c18a79645
Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets.
[ { "docid": "2871de581ee0efe242438567ca3a57dd", "text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.", "title": "" } ]
[ { "docid": "7f5bc34cd08a09014cff1b07c2cf72d0", "text": "This paper presents the RF telecommunications system designed for the New Horizons mission, NASA’s planned mission to Pluto, with focus on new technologies developed to meet mission requirements. These technologies include an advanced digital receiver — a mission-enabler for its low DC power consumption at 2.3 W secondary power. The receiver is one-half of a card-based transceiver that is incorporated with other spacecraft functions into an integrated electronics module, providing further reductions in mass and power. Other developments include extending APL’s long and successful flight history in ultrastable oscillators (USOs) with an updated design for lower DC power. These USOs offer frequency stabilities to 1 part in 10, stabilities necessary to support New Horizons’ uplink radio science experiment. In antennas, the 2.1 meter high gain antenna makes use of shaped suband main reflectors to improve system performance and achieve a gain approaching 44 dBic. New Horizons would also be the first deep-space mission to fly a regenerative ranging system, offering up to a 30 dB performance improvement over sequential ranging, especially at long ranges. The paper will provide an overview of the current system design and development and performance details on the new technologies mentioned above. Other elements of the telecommunications system will also be discussed. Note: New Horizons is NASA’s planned mission to Pluto, and has not been approved for launch. All representations made in this paper are contingent on a decision by NASA to go forward with the preparation for and launch of the mission.", "title": "" }, { "docid": "19ad4b01b9e55995ea85e72b9fa100bd", "text": "This paper describes the integration of the Alice 3D virtual worlds environment into many disciplines in elementary school, middle school and high school. We have developed a wide range of Alice instructional materials including tutorials for both computer science concepts and animation concepts. To encourage the building of more complicated worlds, we have developed template Alice classes and worlds. With our materials, teachers and students are exposed to computing concepts while using Alice to create projects, stories, games and quizzes. These materials were successfully used in the summers 2008 and 2009 in training and working with over 130 teachers.", "title": "" }, { "docid": "1d949b64320fce803048b981ae32ce38", "text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. 
The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.", "title": "" }, { "docid": "fe31348bce3e6e698e26aceb8e99b2d8", "text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.", "title": "" }, { "docid": "47e06f5c195d2e1ecb6199b99ef1ee2d", "text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newly-collected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.", "title": "" }, { "docid": "0957b0617894561ea6d6e85c43cfb933", "text": "We consider the online metric matching problem. In this problem, we are given a graph with edge weights satisfying the triangle inequality, and k vertices that are designated as the right side of the matching.
Over time up to k requests arrive at an arbitrary subset of vertices in the graph and each vertex must be matched to a right side vertex immediately upon arrival. A vertex cannot be rematched to another vertex once it is matched. The goal is to minimize the total weight of the matching. We give an O(log^2 k) competitive randomized algorithm for the problem. This improves upon the best known guarantee of O(log^3 k) due to Meyerson, Nanavati and Poplawski [19]. It is well known that no deterministic algorithm can have a competitive ratio of less than 2k − 1, and that no randomized algorithm can have a competitive ratio of less than ln k.", "title": "" }, { "docid": "c48d3a5d1cf7065de41bf3acfe5f9d0b", "text": "In this work we perform experiments with the recently published work on Capsule Networks. Capsule Networks have been shown to deliver state of the art performance for MNIST and claim to have greater discriminative power than Convolutional Neural Networks for special tasks, such as recognizing overlapping digits. The authors of Capsule Networks have evaluated datasets with low number of categories, viz. MNIST, CIFAR-10, SVHN among others. We evaluate capsule networks on two datasets viz. Traffic Signals, Food101, and CIFAR10 with less number of iterations, making changes to the architecture to account for RGB images. Traditional techniques like dropout, batch normalization were applied to capsule networks for performance evaluation.", "title": "" }, { "docid": "923a714ed2811e29647870a2694698b1", "text": "Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets", "title": "" }, { "docid": "fe62f8473bed5b26b220874ef448e912", "text": "Dual stripline routing is more and more widely used in the modern high speed PCB design due to its cost advantage of reduced overall layer count. However, the major challenge of a successful dual stripline design is to handle the additional interferences introduced by the signals on adjacent layers. This paper studies the crosstalk effect of the dual stripline with both parallel and angled routing, and proposes design solutions to tackle the challenge.
Analytical and empirical algorithms are proposed to estimate the crosstalk waveforms from multiple aggressors, which provide quick design risk assessment, and the waveform is well correlated to the 3D full wave EM simulation results.", "title": "" }, { "docid": "d33b5c031cf44d3b7a95ca5b0335f91c", "text": "Straightforward application of Deep Belief Nets (DBNs) to acoustic modeling produces a rich distributed representation of speech data that is useful for recognition and yields impressive results on the speaker-independent TIMIT phone recognition task. However, the first-layer Gaussian-Bernoulli Restricted Boltzmann Machine (GRBM) has an important limitation, shared with mixtures of diagonalcovariance Gaussians: GRBMs treat different components of the acoustic input vector as conditionally independent given the hidden state. The mean-covariance restricted Boltzmann machine (mcRBM), first introduced for modeling natural images, is a much more representationally efficient and powerful way of modeling the covariance structure of speech data. Every configuration of the precision units of the mcRBM specifies a different precision matrix for the conditional distribution over the acoustic space. In this work, we use the mcRBM to learn features of speech data that serve as input into a standard DBN. The mcRBM features combined with DBNs allow us to achieve a phone error rate of 20.5%, which is superior to all published results on speaker-independent TIMIT to date.", "title": "" }, { "docid": "051c530bf9d49bf1066ddf856488dff1", "text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.", "title": "" }, { "docid": "06f8b713ed4020c99403c28cbd1befbc", "text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. 
Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.", "title": "" }, { "docid": "660ed094efb11b7d39ecfd5b6f2cfc19", "text": "Protocol reverse engineering has often been a manual process that is considered time-consuming, tedious and error-prone. To address this limitation, a number of solutions have recently been proposed to allow for automatic protocol reverse engineering. Unfortunately, they are either limited in extracting protocol fields due to lack of program semantics in network traces or primitive in only revealing the flat structure of protocol format. In this paper, we present a system called AutoFormat that aims at not only extracting protocol fields with high accuracy, but also revealing the inherently “non-flat”, hierarchical structures of protocol messages. AutoFormat is based on the key insight that different protocol fields in the same message are typically handled in different execution contexts (e.g., the runtime call stack). As such, by monitoring the program execution, we can collect the execution context information for every message byte (annotated with its offset in the entire message) and cluster them to derive the protocol format. We have evaluated our system with more than 30 protocol messages from seven protocols, including two text-based protocols (HTTP and SIP), three binary-based protocols (DHCP, RIP, and OSPF), one hybrid protocol (CIFS/SMB), as well as one unknown protocol used by a real-world malware. Our results show that AutoFormat can not only identify individual message fields automatically and with high accuracy (an average 93.4% match ratio compared with Wireshark), but also unveil the structure of the protocol format by revealing possible relations (e.g., sequential, parallel, and hierarchical) among the message fields. Part of this research has been supported by the National Science Foundation under grants CNS-0716376 and CNS-0716444. The bulk of this work was performed when the first author was visiting George Mason University in Summer 2007.", "title": "" }, { "docid": "201f576423ed88ee97d1505b6d5a4d3f", "text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. 
Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.", "title": "" }, { "docid": "61c6c9f6a0f60333ad2997b15646a096", "text": "The density of neustonic plastic particles was compared to that of zooplankton in the coastal ocean near Long Beach, California. Two trawl surveys were conducted, one after an extended dry period when there was little land-based runoff, the second shortly after a storm when runoff was extensive. On each survey, neuston samples were collected at five sites along a transect parallel to shore using a manta trawl lined with 333 micro mesh. Average plastic density during the study was 8 pieces per cubic meter, though density after the storm was seven times that prior to the storm. The mass of plastics was also higher after the storm, though the storm effect on mass was less than it was for density, reflecting a smaller average size of plastic particles after the storm. The average mass of plastic was two and a half times greater than that of plankton, and even greater after the storm. The spatial pattern of the ratio also differed before and after a storm. Before the storm, greatest plastic to plankton ratios were observed at two stations closest to shore, whereas after the storm these had the lowest ratios.", "title": "" }, { "docid": "375b2025d7523234bb10f5f16b2b0764", "text": "In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two post-processing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.", "title": "" }, { "docid": "4bf253b2349978d17fd9c2400df61d21", "text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. 
As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the effects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.", "title": "" }, { "docid": "cb65229a1edd5fc6dc5cf6be7afc1b9e", "text": "This session studies specific challenges that Machine Learning (ML) algorithms have to tackle when faced with Big Data problems. These challenges can arise when any of the dimensions in a ML problem grows significantly: a) size of training set, b) size of test set or c) dimensionality. The studies included in this edition explore the extension of previous ML algorithms and practices to Big Data scenarios. Namely, specific algorithms for recurrent neural network training, ensemble learning, anomaly detection and clustering are proposed. The results obtained show that this new trend of ML problems presents both a challenge and an opportunity to obtain results which could allow ML to be integrated in many new applications in years to come.", "title": "" }, { "docid": "e29d3ab3d3b9bd6cbff1c2a79a6c3070", "text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.", "title": "" }, { "docid": "510b9b709d8bd40834ed0409d1e83d4d", "text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria.
AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.", "title": "" } ]
scidocsrr
01adc0efe604be82d0916c55a9044287
The Latent Relation Mapping Engine: Algorithm and Experiments
[ { "docid": "80db4fa970d0999a43d31d58e23444bb", "text": "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.", "title": "" } ]
[ { "docid": "fb4837a619a6b9e49ca2de944ec2314e", "text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.", "title": "" }, { "docid": "b49275c9f454cdb0061e0180ac50a04f", "text": "Implementing controls in the car becomes a major challenge: The use of simple physical buttons does not scale to the increased number of assistive, comfort, and infotainment functions. Current solutions include hierarchical menus and multi-functional control devices, which increase complexity and visual demand. Another option is speech control, which is not widely accepted, as it does not support visibility of actions, fine-grained feedback, and easy undo of actions. Our approach combines speech and gestures. By using speech for identification of functions, we exploit the visibility of objects in the car (e.g., mirror) and simple access to a wide range of functions equaling a very broad menu. Using gestures for manipulation (e.g., left/right), we provide fine-grained control with immediate feedback and easy undo of actions. In a user-centered process, we determined a set of user-defined gestures as well as common voice commands. For a prototype, we linked this to a car interior and driving simulator. In a study with 16 participants, we explored the impact of this form of multimodal interaction on the driving performance against a baseline using physical buttons. The results indicate that the use of speech and gesture is slower than using buttons but results in a similar driving performance. Users comment in a DALI questionnaire that the visual demand is lower when using speech and gestures.", "title": "" }, { "docid": "086f7cf2643450959d575562a67e3576", "text": "Single image super resolution (SISR) is to reconstruct a high resolution image from a single low resolution image. The SISR task has been a very attractive research topic over the last two decades. In recent years, convolutional neural network (CNN) based models have achieved great performance on SISR task. Despite the breakthroughs achieved by using CNN models, there are still some problems remaining unsolved, such as how to recover high frequency details of high resolution images. Previous CNN based models always use a pixel wise loss, such as l2 loss. Although the high resolution images constructed by these models have high peak signal-to-noise ratio (PSNR), they often tend to be blurry and lack high-frequency details, especially at a large scaling factor. In this paper, we build a super resolution perceptual generative adversarial network (SRPGAN) framework for SISR tasks. In the framework, we propose a robust perceptual loss based on the discriminator of the built SRPGAN model. 
We use the Charbonnier loss function to build the content loss and combine it with the proposed perceptual loss and the adversarial loss. Compared with other state-of-the-art methods, our method has demonstrated great ability to construct images with sharp edges and rich details. We also evaluate our method on different benchmarks and compare it with previous CNN based methods. The results show that our method can achieve much higher structural similarity index (SSIM) scores on most of the benchmarks than the previous state-of-art methods.", "title": "" }, { "docid": "ba87ca7a07065e25593e6ae5c173669d", "text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.", "title": "" }, { "docid": "57290d8e0a236205c4f0ce887ffed3ab", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "f7c4b71b970b7527cd2650ce1e05ab1b", "text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. 
A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.", "title": "" }, { "docid": "7182dfe75bc09df526da51cd5c8c8d20", "text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. 
This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.", "title": "" }, { "docid": "2af4d946d00b37ec0f6d37372c85044b", "text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).", "title": "" }, { "docid": "912c213d76bed8d90f636ea5a6220cf1", "text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. 
The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.", "title": "" }, { "docid": "3a98dd611afcfd6d51c319bde3b84cc9", "text": "This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/3, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.", "title": "" }, { "docid": "2f5776d8ce9714dcee8d458b83072f74", "text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.", "title": "" }, { "docid": "c3566171b68e4025931a72064e74e4ae", "text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.", "title": "" }, { "docid": "56c42f370442a5ec485e9f1d719d7141", "text": "The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance or page rank is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources in particular to store and maintain the link matrix of the web. We briefly discuss a new algorithm that works on-line, and uses much less resources. In particular, it does not require storing the link matrix. 
It is on-line in that it continuously refines its estimate of page importance while the web/graph is visited. When the web changes, page importance changes as well. We modify the algorithm so that it adapts dynamically to changes of the web. We report on experiments on web data and on synthetic data.", "title": "" }, { "docid": "bb444221c5a8eefad3e2a9a175bfccbc", "text": "This paper presents new experimental results of angle of arrival (AoA) measurements for localizing passive RFID tags in the UHF frequency range. The localization system is based on the principle of a phased array with electronic beam steering mechanism. This approach has been successfully applied within a UHF RFID system and it allows the precise determination of the angle and the position of small passive RFID tags. The paper explains the basic principle, the experimental setup with the phased array and shows results of the measurements.", "title": "" }, { "docid": "b71197073ea33bb8c61973e8cd7d2775", "text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.", "title": "" }, { "docid": "6a5e0e30eb5b7f2efe76e0e58e04ae4a", "text": "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts” using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensionality video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features.", "title": "" }, { "docid": "a5f557ddac63cd24a11c1490e0b4f6d4", "text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. 
Also, the role played by individuals with different degrees during the optimization process is studied.", "title": "" }, { "docid": "da5562859bfed0057e0566679a4aca3d", "text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.", "title": "" }, { "docid": "d3eff4c249e464e9e571d80d4fe95bbd", "text": "CONIKS is a proposed key transparency system which enables a centralized service provider to maintain an auditable yet privacypreserving directory of users’ public keys. In the original CONIKS design, users must monitor that their data is correctly included in every published snapshot of the directory, necessitating either slow updates or trust in an unspecified third-party to audit that the data structure has stayed consistent. We demonstrate that the data structures for CONIKS are very similar to those used in Ethereum, a consensus computation platform with a Turing-complete programming environment. We can take advantage of this to embed the core CONIKS data structures into an Ethereum contract with only minor modifications. Users may then trust the Ethereum network to audit the data structure for consistency and non-equivocation. Users who do not trust (or are unaware of) Ethereum can self-audit the CONIKS data structure as before. We have implemented a prototype contract for our hybrid EthIKS scheme, demonstrating that it adds only modest bandwidth overhead to CONIKS proofs and costs hundredths of pennies per key update in fees at today’s rates.", "title": "" }, { "docid": "8921cffb633b0ea350b88a57ef0d4437", "text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.", "title": "" } ]
scidocsrr
01eb7e40fc907559056c1c5eb1c04c12
Data Mining Model for Predicting Student Enrolment in STEM Courses in Higher Education Institutions
[ { "docid": "f7a36f939cbe9b1d403625c171491837", "text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block), that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e. enrolment data help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled to 71150 Information Systems course was used to perform a quantitative analysis of study outcome. Based on a data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggests that all trees, based only on enrolment data are not quite good in separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.", "title": "" }, { "docid": "055faaaa14959a204ca19a4962f6e822", "text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. 
Data cleaning: The removal of noise and inconsistent data. 2. Data integration: The combination of multiple sources of data. 3. Data selection: The data relevant for analysis is retrieved from the database. 4. Data transformation: The consolidation and transformation of data into forms appropriate for mining. 5. Data mining: The use of intelligent methods to extract patterns from data. 6. Pattern evaluation: Identification of patterns that are interesting. 7. Knowledge presentation: Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License; II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform; III. A comprehensive collection of data preprocessing and modeling techniques; IV. Ease of use due to its graphical user interfaces. Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10].
All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as census (2001), socio-economic data, and some basic information of Latur district are collected from National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA, the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig. 1 Processed ARFF file in WEKA. In the above shown file, the data of 729 villages is processed with different attributes (25) like population, health, literacy, village locations etc. Among all these, a few are preprocessed attributes generated from the census data, like percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. The processed data in Weka can be analyzed using different data mining techniques, such as Classification, Clustering, Association rule mining and Visualization algorithms. Figure 2 shows a few of the processed attributes visualized in a 2-dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to the associative relation of two or more attributes in the data set. In this process, we have made an attempt to visualize the impact of male and female literacy on gender inequality. The literacy-related and population data is processed to compute the percent-wise male and female literacy. Accordingly, we have computed the sex ratio attribute from the given male and female population data.
The new attributes, male_percent_literacy, female_percent_literacy and sex_ratio, are compared with each other to extract the impact of literacy on gender inequality. Figure 3 and Figure 4 show the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. Considering both results, the female percent literacy is lower than the male percent literacy in the district. The sex ratio values are higher for male percent literacy than for female percent literacy. The results clearly show that literacy is very important for managing the gender inequality of any region. ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and WEKA for providing such a strong tool to extract and analyze knowledge from the database. CONCLUSION: Knowledge extraction from database is becom", "title": "" }, { "docid": "120452d49d476366abcb52b86d8110b5", "text": "Many companies, like the credit card, insurance, bank and retail industries, require direct marketing. Data mining can help those institutes to set marketing goals. Data mining techniques have good prospects in their target audiences and improve the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe to a term deposit. We also made a comparative study of the performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from the decision tree that helps in making interesting and important decisions in the business area.", "title": "" } ]
[ { "docid": "a7317f06cf34e501cb169bdf805e7e34", "text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.", "title": "" }, { "docid": "64139426292bc1744904a0758b6caed1", "text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology. Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measures accordance with a set of distinctive structural qualities derived from the ontology. 
We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure, with the mean similarity ratings produced by humans for the same pairs.", "title": "" }, { "docid": "710e81da55d50271b55ac9a4f2d7f986", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bb314530c796fbec6679a4a0cc6cd105", "text": "The undergraduate computer science curriculum is generally focused on skills and tools; most students are not exposed to much research in the field, and do not learn how to navigate the research literature. We describe how science fiction reviews were used as a gateway to research reviews. Students learn a little about current or recent research on a topic that stirs their imagination, and learn how to search for, read critically, and compare technical papers on a topic related their chosen science fiction book, movie, or TV show.", "title": "" }, { "docid": "371dad2a860f7106f10fd1f204afd3f2", "text": "Increased neuromuscular excitability with varying clinical and EMG features were also observed during KCl administration in both cases. The findings are discussed on the light of the membrane ionic gradients current theory.", "title": "" }, { "docid": "eaeccd0d398e0985e293d680d2265528", "text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. 
A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.", "title": "" }, { "docid": "10e41955aea6710f198744ac1f201d64", "text": "Current research on culture focuses on independence and interdependence and documents numerous East-West psychological differences, with an increasing emphasis placed on cognitive mediating mechanisms. Lost in this literature is a time-honored idea of culture as a collective process composed of cross-generationally transmitted values and associated behavioral patterns (i.e., practices). A new model of neuro-culture interaction proposed here addresses this conceptual gap by hypothesizing that the brain serves as a crucial site that accumulates effects of cultural experience, insofar as neural connectivity is likely modified through sustained engagement in cultural practices. Thus, culture is \"embrained,\" and moreover, this process requires no cognitive mediation. The model is supported in a review of empirical evidence regarding (a) collective-level factors involved in both production and adoption of cultural values and practices and (b) neural changes that result from engagement in cultural practices. Future directions of research on culture, mind, and the brain are discussed.", "title": "" }, { "docid": "d5b986cf02b3f9b01e5307467c1faec2", "text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.", "title": "" }, { "docid": "d39843f342646e4d338ab92bb7391d76", "text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-mum thickness, but with a thickness of only 1 mum, which is typical of an electrodeposited core. The micro-Fluxgate has been realized in a 0.5- mum CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-mum CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/muT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (plusmn60 muT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth magnetic field is 2deg. 
The power consumption of the sensor is about 13.7 mW. The total power consumption of the system is about 90 mW.", "title": "" }, { "docid": "7d0d68f2dd9e09540cb2ba71646c21d2", "text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of the final definitive restoration, but most of the time the placement of the implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of the final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4 mm determined by cone beam computed tomography (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately post-operatively and six months post-operatively. The change in vertical defect size was calculated radiographically and then statistically analyzed. RESULTS: The vertical dehiscence defect was sufficiently recovered in 5 implant sites while in the other 6 sites it was decreased to a mean value of 1.25 mm ± 0.69 SD, i.e., the defect coverage in 6 implants occurred with a mean value of 4.59 mm ± 0.49 SD. Also, the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92. CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.", "title": "" }, { "docid": "c7d23af5ad79d9863e83617cf8bbd1eb", "text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.", "title": "" }, { "docid": "bb8b6d2424ef7709aa1b89bc5d119686", "text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors.
In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.", "title": "" }, { "docid": "8e8dcbc4eacf7484a44b4b6647fcfdb2", "text": "BACKGROUND\nWith the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics.\n\n\nDESCRIPTION\nThis paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications.\n\n\nCONCLUSION\nTopic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.", "title": "" }, { "docid": "b5b6fc6ce7690ae8e49e1951b08172ce", "text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. 
A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.", "title": "" }, { "docid": "77985effa998d08e75eaa117e07fc7a9", "text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.", "title": "" }, { "docid": "2269c84a2725605242790cf493425e0c", "text": "Tissue engineering aims to improve the function of diseased or damaged organs by creating biological substitutes. To fabricate a functional tissue, the engineered construct should mimic the physiological environment including its structural, topographical, and mechanical properties. Moreover, the construct should facilitate nutrients and oxygen diffusion as well as removal of metabolic waste during tissue regeneration. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning, and direct writing have emerged as promising platforms for making 3D tissue constructs that can address the abovementioned challenges. Here, we critically review the techniques used to form cell-free and cell-laden fibers and to assemble them into scaffolds. We compare their mechanical properties, morphological features and biological activity. We discuss current challenges and future opportunities of fiber-based tissue engineering (FBTE) for use in research and clinical practice.", "title": "" }, { "docid": "93f2fb12d61f3acb2eb31f9a2335b9c3", "text": "Cluster identification in large scale information network is a highly attractive issue in the network knowledge mining. Traditionally, community detection algorithms are designed to cluster object population based on minimizing the cutting edge number. Recently, researchers proposed the concept of higher-order clustering framework to segment network objects under the higher-order connectivity patterns. However, the essences of the numerous methodologies are focusing on mining the homogeneous networks to identify groups of objects which are closely related to each other, indicating that they ignore the heterogeneity of different types of objects and links in the networks. In this study, we propose an integrated framework of heterogeneous information network structure and higher-order clustering for mining the hidden relationship, which include three major steps: (1) Construct the heterogeneous network, (2) Convert HIN to Homogeneous network, and (3) Community detection.", "title": "" }, { "docid": "226d474f5d0278f81bcaf7203706486b", "text": "Human pose estimation is a well-known computer vision problem that receives intensive research interest. The reason for such interest is the wide range of applications that the successful estimation of human pose offers. Articulated pose estimation includes real time acquisition, analysis, processing and understanding of high dimensional visual information. Ensemble learning methods operating on hand-engineered features have been commonly used for addressing this task. 
Deep learning exploits representation learning methods to learn multiple levels of representations from raw input data, alleviating the need to hand-crafted features. Deep convolutional neural networks are achieving the state-of-the-art in visual object recognition, localization, detection. In this paper, the pose estimation task is formulated as an offset joint regression problem. The 3D joints positions are accurately detected from a single raw depth image using a deep convolutional neural networks model. The presented method relies on the utilization of the state-of-the-art data generation pipeline to generate large, realistic, and highly varied synthetic set of training images. Analysis and experimental results demonstrate the generalization performance and the real time successful application of the proposed method.", "title": "" }, { "docid": "49d5f6fdc02c777d42830bac36f6e7e2", "text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.", "title": "" }, { "docid": "b261534c045299c1c3a0e0cc37caa618", "text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. 
His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.", "title": "" } ]
scidocsrr
ef23a68c8a9a134db33900965e814f8d
Analysis of Permission-based Security in Android through Policy Expert, Developer, and End User Perspectives
[ { "docid": "948d3835e90c530c4290e18f541d5ef2", "text": "Each time a user installs an application on their Android phone they are presented with a full screen of information describing what access they will be granting that application. This information is intended to help them make two choices: whether or not they trust that the application will not damage the security of their device and whether or not they are willing to share their information with the application, developer, and partners in question. We performed a series of semi-structured interviews in two cities to determine whether people read and understand these permissions screens, and to better understand how people perceive the implications of these decisions. We find that the permissions displays are generally viewed and read, but not understood by Android users. Alarmingly, we find that people are unaware of the security risks associated with mobile apps and believe that app marketplaces test and reject applications. In sum, users are not currently well prepared to make informed privacy and security decisions around installing applications.", "title": "" } ]
[ { "docid": "8e10d20723be23d699c0c581c529ee19", "text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.", "title": "" }, { "docid": "12b855b39278c49d448fbda9aa56cacf", "text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.", "title": "" }, { "docid": "6a9738cbe28b53b3a9ef179091f05a4a", "text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. 
There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.", "title": "" }, { "docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10", "text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.", "title": "" }, { "docid": "755f7d663e813d7450089fc0d7058037", "text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.", "title": "" }, { "docid": "de39f498f28cf8cfc01f851ca3582d32", "text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. 
However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.", "title": "" }, { "docid": "96fd2fdc0ea4fbde407ac1f56452ca24", "text": "EEG signals, measuring transient brain activities, can be used as a source of biometric information with potential application in high-security person recognition scenarios. However, due to the inherent nature of these signals and the process used for their acquisition, their effective preprocessing is critical for their successful utilisation. In this paper we compare the effectiveness of different wavelet-based noise removal methods and propose an EEG-based biometric identification system which combines two such de-noising methods to enhance the signal preprocessing stage. In tests using 50 subjects from a public database, the proposed new approach is shown to provide improved identification performance over alternative techniques. Another important preprocessing consideration is the segmentation of the EEG record prior to de-noising. Different segmentation approaches were investigated and the trade-off between performance and computation time is explored. Finally the paper reports on the impact of the choice of wavelet function used for feature extraction on system performance.", "title": "" }, { "docid": "4e86e02be77fe4e10c199efa1e9456c4", "text": "This paper presents EsdRank, a new technique for improving ranking using external semi-structured data such as controlled vocabularies and knowledge bases. EsdRank treats vocabularies, terms and entities from external data, as objects connecting query and documents. Evidence used to link query to objects, and to rank documents are incorporated as features between query-object and object-document correspondingly. A latent listwise learning to rank algorithm, Latent-ListMLE, models the objects as latent space between query and documents, and learns how to handle all evidence in a unified procedure from document relevance judgments. EsdRank is tested in two scenarios: Using a knowledge base for web search, and using a controlled vocabulary for medical search. Experiments on TREC Web Track and OHSUMED data show significant improvements over state-of-the-art baselines.", "title": "" }, { "docid": "5d05addd1cac2ea4ca5008950a21bd06", "text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. 
Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.", "title": "" }, { "docid": "935c404529b02cee2620e52f7a09b84d", "text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.", "title": "" }, { "docid": "c01dd2ae90781291cb5915957bd42ae1", "text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. 
Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.", "title": "" }, { "docid": "4a70c88a031195a5593aaa403b9681cd", "text": "In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). Particularly, how these techniques help to improve each other. To this end, we analyze the limitation of adversarial training as the defense method, starting from questioning how well the robustness of a model can generalize. Then, we successfully improve the generalizability via data augmentation by the “fake” images sampled from generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry et al., 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018). Source code is publicly available online at https://github.com/anonymous.", "title": "" }, { "docid": "813106ce10d23483ef8afa56857277a2", "text": "Reinforced concrete walls are commonly used as the primary lateral force-resisting system for tall buildings. As the tools for conducting nonlinear response history analysis have improved and with the advent of performance-based seismic design, reinforced concrete walls and core walls are often employed as the only lateral force-resisting system. Proper modelling of the load versus deformation behaviour of reinforced concrete walls and link beams is essential to accurately predict important response quantities. Given this critical need, an overview of modelling approaches appropriate to capture the lateral load responses of both slender and stout reinforced concrete walls, as well as link beams, is presented. Modelling of both flexural and shear responses is addressed, as well as the potential impact of coupled flexure–shear behaviour. Model results are compared with experimental results to assess the ability of common modelling approaches to accurately predict both global and local experimental responses. Based on the findings, specific recommendations are made for general modelling issues, limiting material strains for combined bending and axial load, and shear backbone relations. Copyright © 2007 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2dee247b24afc7ddba44b312c0832bc1", "text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. 
Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.", "title": "" }, { "docid": "49517920ddecf10a384dc3e98e39459b", "text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.", "title": "" }, { "docid": "cc2579bb621338908cacc7808cb1f851", "text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. 
The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.", "title": "" }, { "docid": "3902afc560de6f0b028315977bc55976", "text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. This provides huge advantages to the implementation of this project, especially the scope of the project will be focus in urban areas where the level of congestion is bad.", "title": "" }, { "docid": "6831c633bf7359b8d22296b52a9a60b8", "text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.", "title": "" }, { "docid": "64c06bffe4aeff54fbae9d87370e552c", "text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. 
It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.", "title": "" }, { "docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3", "text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.", "title": "" } ]
scidocsrr
4b78e80f2a680dcde17697d86ec3ba2e
Robust discrete optimization and network flows
[ { "docid": "d53726710ce73fbcf903a1537f149419", "text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.", "title": "" }, { "docid": "4a3f7e89874c76f62aa97ef6a114d574", "text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.", "title": "" } ]
[ { "docid": "a33ed384b8f4a86e8cc82970c7074bad", "text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.", "title": "" }, { "docid": "43e39433013ca845703af053e5ef9e11", "text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.", "title": "" }, { "docid": "c26ff98ac6cc027b07fec213a192a446", "text": "Basic to all motile life is a differential approach/avoid response to perceived features of environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation, and expectations. The organism organizes itself in relation to the challenge. We propose to call this the \"preparatory set\" (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of body-mind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. 
We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.", "title": "" }, { "docid": "1d348cc6b7a98dc5b61c66af9e94153c", "text": "With the rapid development of web technology, an increasing number of enterprises having been seeking for a method to facilitate business decision making process, power the bottom line, and achieve a fully coordinated organization, called business intelligence (BI). Unfortunately, traditional BI tends to be unmanageable, risky and prohibitively expensive, especially for Small and Medium Enterprises (SMEs). The emergence of cloud computing and Software as a Service (SaaS) provides a cost effective solution. Recently, business intelligence applications delivered via SaaS, termed as Business Intelligence as a Service (SaaS BI), has proved to be the next generation in BI market. However, since SaaS BI just comes in its infant stage, a general framework maybe poorly considered. Therefore, in this paper we proposed a general conceptual framework for SaaS BI, and presented several possible future directions of SaaS BI.", "title": "" }, { "docid": "89b92204d76120bef660eb55303752d2", "text": "In this paper, we present a design of RTE template structure for AUTOSAR-based vehicle applications. Due to an increase in software complexity in recent years, much greater efforts are necessary to manage and develop software modules in automotive industries. To deal with this issue, an automotive Open system architecture (AUTOSAR) partnership was launched. This embedded platform standardizes software architectures and provides a methodology supporting distributed process. The RTE that is located at the heart of AUTOSAR implements the virtual function bus functionality for a particle electronic control unit (ECU). It enables to communicate between application software components. The purpose of this paper is to design a RTE structure with the AUTOSAR standard concept. As a future work, this research will be further extended, and we will propose the development of a RTE generator that is drawn on an AUTOSAR RTE template structure.", "title": "" }, { "docid": "51f2ba8b460be1c9902fb265b2632232", "text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. 
Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.", "title": "" }, { "docid": "0f56b99bc1d2c9452786c05242c89150", "text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.", "title": "" }, { "docid": "3e24de04f0b1892b27fc60bb8a405d0d", "text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. 
This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.", "title": "" }, { "docid": "3eb419ef59ad59e60bf357cfb2e69fba", "text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based context, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.", "title": "" }, { "docid": "a219afda822413bbed34a21145807b47", "text": "In this work, the author implemented a NOVEL technique of multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) based on space frequency block coding (SF-BC). Where, the implemented code is designed based on the QOC using the techniques of the reconfigurable antennas. The proposed system is implemented using MATLAB program, and the results showing best performance of a wireless communications system of higher coding gain and diversity.", "title": "" }, { "docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86", "text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. 
Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.", "title": "" }, { "docid": "5eeb17964742e1bf1e517afcb1963b02", "text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.", "title": "" }, { "docid": "3980da6e0c81bf029bbada09d7ea59e3", "text": "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. 
Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10%, compared to that obtained using the perfect F-CSI.", "title": "" }, { "docid": "122bc83bcd27b95092c64cf1ad8ee6a8", "text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.", "title": "" }, { "docid": "6f8a2292749aaae6add667d153e3abbd", "text": "Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvents cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee moral and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.", "title": "" }, { "docid": "95db9ce9faaf13e8ff8d5888a6737683", "text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. 
The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. 
Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" }, { "docid": "f141bd66dc2a842c21f905e3e01fa93c", "text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature", "title": "" }, { "docid": "0774820345f37dd1ae474fc4da1a3a86", "text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.", "title": "" }, { "docid": "1c9c30e3e007c2d11c6f5ebd0092050b", "text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. 
Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.", "title": "" } ]
scidocsrr
0362bc28f18510c434c802902068035d
An ontology approach to object-based image retrieval
[ { "docid": "5e0cff7f2b8e5aa8d112eacf2f149d60", "text": "THEORIES IN AI FALL INTO TWO broad categories: mechanism theories and content theories. Ontologies are content theories about the sorts of objects, properties of objects, and relations between objects that are possible in a specified domain of knowledge. They provide potential terms for describing our knowledge about the domain. In this article, we survey the recent development of the field of ontologies in AI. We point to the somewhat different roles ontologies play in information systems, natural-language understanding, and knowledge-based systems. Most research on ontologies focuses on what one might characterize as domain factual knowledge, because knowledge of that type is particularly useful in natural-language understanding. There is another class of ontologies that are important in KBS—one that helps in sharing knowledge about reasoning strategies or problem-solving methods. In a follow-up article, we will focus on method ontologies.", "title": "" }, { "docid": "0084d9c69d79a971e7139ab9720dd846", "text": "Retrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This "Blobworld" representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index Terms—Segmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.", "title": "" } ]
[ { "docid": "e751fdbc980c36b95c81f0f865bb5033", "text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.", "title": "" }, { "docid": "aea0aeea95d251b5a7102825ad5c66ce", "text": "The life time extension in the wireless sensor network (WSN) is the major concern in real time application, if the battery attached with the sensor node life is not optimized properly then the network life fall short. A protocol using a new evolutionary technique, cat swarm optimization (CSO), is designed and implemented in real time to minimize the intra-cluster distances between the cluster members and their cluster heads and optimize the energy distribution for the WSNs. We analyzed the performance of WSN protocol with the help of sensor nodes deployed in a field and grouped in to clusters. The novelty in our proposed scheme is considering the received signal strength, residual battery voltage and intra cluster distance of sensor nodes in cluster head selection with the help of CSO. The result is compared with the well-known protocol Low-energy adaptive clustering hierarchy-centralized (LEACH-C) and the swarm based optimization technique Particle swarm optimization (PSO). It was found that the battery energy level has been increased considerably of the traditional LEACH and PSO algorithm.", "title": "" }, { "docid": "21e47bd70185299e94f8553ca7e60a6e", "text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. 
On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.", "title": "" }, { "docid": "6fb1f05713db4e771d9c610fa9c9925d", "text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.", "title": "" }, { "docid": "5a85c72c5b9898b010f047ee99dba133", "text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. 
The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.", "title": "" }, { "docid": "917154ffa5d9108fd07782d1c9a183ba", "text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.", "title": "" }, { "docid": "70fd930a2a6504404bec67779cba71b2", "text": "This article discusses the logical implementation of the media access control and the physical layer of 100 Gb/s Ethernet. The target are a MAC/PCS LSI, supporting MAC and physical coding sublayer, and a gearbox LSI, providing 10:4 parallel lane-width exchange inside an optical module. The two LSIs are connected by a 100 gigabit attachment unit interface, which consists of ten 10 Gb/s lines. We realized a MAC/PCS logical circuit with a low-frequency clock on a FPGA, whose size is 250 kilo LUTs with a 5.7 Mbit RAM, and the power consumption of the gearbox LSI estimated to become 2.3 W.", "title": "" }, { "docid": "6ea91574db57616682cf2a9608b0ac0b", "text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.", "title": "" }, { "docid": "94cf1976c10d632cfce12ce3f32be4cc", "text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. 
This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.", "title": "" }, { "docid": "db1abd38db0295fc573bdfca2c2b19a3", "text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. 
We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.", "title": "" }, { "docid": "5528f1ee010e7fba440f1f7ff84a3e8e", "text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …", "title": "" }, { "docid": "d10afc83c234c1c0531e23b29b5d8895", "text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). 
Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.", "title": "" }, { "docid": "ee1e2400ed5c944826747a8e616b18c1", "text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.", "title": "" }, { "docid": "ec673efa5f837ba4c997ee7ccd845ce1", "text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. 
Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate (LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversarial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR-10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.", "title": "" }, { "docid": "a27a05cb00d350f9021b5c4f609d772c", "text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.", "title": "" }, { "docid": "64b3bcb65b9e890810561ab5700dec32", "text": "In this paper, we address the fusion problem of two estimates, where the cross-correlation between the estimates is unknown. To solve the problem within the Bayesian framework, we assume that the covariance matrix has a prior distribution. We also assume that we know the covariance of each estimate, i.e., the diagonal block of the entire covariance matrix (of the random vector consisting of the two estimates). We then derive the conditional distribution of the off-diagonal blocks, which is the cross-correlation of our interest. The conditional distribution happens to be the inverted matrix variate t-distribution. We can readily sample from this distribution and use a Monte Carlo method to compute the minimum mean square error estimate for the fusion problem. Simulations show that the proposed method works better than the popular covariance intersection method.", "title": "" }, { "docid": "b47980c393116ac598a7f5c38fb402b9", "text": "Skin cancer, the most common human malignancy, is primarily diagnosed visually by physicians. Classification with an automated method like CNN [2, 3] shows potential for diagnosing the skin cancer according to the medical photographs. By now, the deep convolutional neural networks can achieve the level of human dermatologist. 
This work is dedicated on developing a Deep Learning method for ISIC [5] 2017 Skin Lesion Detection Competition to classify the dermatology pictures, which is aiming at improving the diagnostic accuracy rate. As an result, it will improve the general level of the human health. The challenge falls into three sub-challenges, including Lesion Segmentation, Lesion Dermoscopic Feature Extraction and Lesion Classification. We focus on the Lesion Classification task. The proposed algorithm is comprised of three steps: (1) original images preprocessing, (2) modelling the processed images using CNN [2, 3] in Caffe [4] framework, (3) predicting the test images and calculating the scores that represent the likelihood of corresponding classification. The models are built on the source images are using the Caffe [4] framework. The scores in prediction step are obtained by two different models from the source images.", "title": "" }, { "docid": "d7a2708fc70f6480d9026aeefce46610", "text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.", "title": "" }, { "docid": "ed9d72566cdf3e353bf4b1e589bf85eb", "text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.", "title": "" } ]
scidocsrr
75e47e330359d1afc684d4cd17beae29
Depth camera tracking with contour cues
[ { "docid": "1782fc75827937c6b31951bfca997f48", "text": "Registering 2 or more range scans is a fundamental problem, with application to 3D modeling. While this problem is well addressed by existing techniques such as ICP when the views overlap significantly at a good initialization, no satisfactory solution exists for wide baseline registration. We propose here a novel approach which leverages contour coherence and allows us to align two wide baseline range scans with limited overlap from a poor initialization. Inspired by ICP, we maximize the contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. We use the contour coherence under a multi-view rigid registration framework, and this enables the reconstruction of accurate and complete 3D models from as few as 4 frames. We further extend it to handle articulations, and this allows us to model articulated objects such as human body. Experimental results on both synthetic and real data demonstrate the effectiveness and robustness of our contour coherence based registration approach to wide baseline range scans, and to 3D modeling.", "title": "" }, { "docid": "c64d5309c8f1e2254144215377b366b1", "text": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.", "title": "" }, { "docid": "5dac8ef81c7a6c508c603b3fd6a87581", "text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. 
Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "title": "" } ]
[ { "docid": "6bce7698f908721da38a3c6e6916a30e", "text": "For learning in big datasets, the classification performance of ELM might be low due to input samples are not extracted features properly. To address this problem, the hierarchical extreme learning machine (H-ELM) framework was proposed based on the hierarchical learning architecture of multilayer perceptron. H-ELM composes of two parts; the first is the unsupervised multilayer encoding part and the second part is the supervised feature classification part. H-ELM can give higher accuracy rate than of the traditional ELM. However, it still has to enhance its classification performance. Therefore, this paper proposes a new method namely as the extending hierarchical extreme learning machine (EH-ELM). For the extended supervisor part of EH-ELM, we have got an idea from the two-layers extreme learning machine. To evaluate the performance of EH-ELM, three different image datasets; Semeion, MNIST, and NORB, were studied. The experimental results show that EH-ELM achieves better performance than of H-ELM and the other multi-layer framework.", "title": "" }, { "docid": "dfc51ea36992f8afccfbf625e3016054", "text": "Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.", "title": "" }, { "docid": "3f95493016925d4f4a8a0d0a1bc8dc9d", "text": "A consequent pole, dual rotor, axial flux vernier permanent magnet (VPM) machine is developed to reduce magnet usage and increase torque density. Its end winding length is much shorter than that of regular VPM machines due to its toroidal winding configuration. The configurations and features of the proposed machine are discussed. Benefited from its vernier and consequent pole structure, this new machine exhibits much higher back-EMF and torque density than that of a regular dual rotor axial flux machine, while the magnet usage is halved. The influence of main design parameters, such as slot opening, ratio of inner to outer stator diameter, magnet thickness etc., on torque performance is analyzed based on the quasi-3-dimensional (quasi-3D) finite element analysis (FEA). The analyzing results are validated by real 3D FEA.", "title": "" }, { "docid": "43baeb87f1798d52399ba8c78ffa7fef", "text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. 
On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-", "title": "" }, { "docid": "e456ab6399ad84b575737d2a91597fdc", "text": "In the last two decades, number of Higher Education Institutions (HEI) grows rapidly in India. Since most of the institutions are opened in private mode therefore, a cut throat competition rises among these institutions while attracting the student to got admission. This is the reason for institutions to focus on the strength of students not on the quality of education. This paper presents a data mining application to generate predictive models for engineering student’s dropout management. Given new records of incoming students, the predictive model can produce short accurate prediction list identifying students who tend to need the support from the student dropout program most. The results show that the machine learning algorithm is able to establish effective predictive model from the existing student dropout data. Keywords– Data Mining, Machine Learning Algorithms, Dropout Management and Predictive Models", "title": "" }, { "docid": "4ee62d81dcdf6e1dc9b06757668e0fc8", "text": "The frequent and protracted use of video games with serious personal, family and social consequences is no longer just a pleasant pastime and could lead to mental and physical health problems. 
Although there is no official recognition of video game addiction on the Internet as a mild mental health disorder, further scientific research is needed.", "title": "" }, { "docid": "bc018ef7cbcf7fc032fe8556016d08b1", "text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.", "title": "" }, { "docid": "089808010a2925a7eaca71736fbabcaf", "text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of images, the global motion can be described by independent motion models. On the other hand, in a sequence there exist as many as \u000e pairwise relative motion constraints that can be solved for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.", "title": "" }, { "docid": "3327a70849d7331bb1db01d99a3d0000", "text": "Queueing network models have proved to be cost effective tools for analyzing modern computer systems. This tutorial paper presents the basic results using the operational approach, a framework which allows the analyst to test whether each assumption is met in a given system. The early sections describe the nature of queueing network models and their applications for calculating and predicting performance quantities. The basic performance quantities--such as utilizations, mean queue lengths, and mean response times--are defined, and operational relationships among them are derived. Following this, the concept of job flow balance is introduced and used to study asymptotic throughputs and response times. 
The concepts of state transition balance, one-step behavior, and homogeneity are then used to relate the proportions of time that each system state is occupied to the parameters of job demand and to device characteristics. Efficient methods for computing basic performance quantities are also described. Finally the concept of decomposition is used to simplify analyses by replacing subsystems with equivalent devices. All concepts are illustrated liberally with examples.", "title": "" }, { "docid": "100c62f22feea14ac54c21408432c371", "text": "Modern approach to the FOREX currency exchange market requires support from the computer algorithms to manage huge volumes of the transactions and to find opportunities in a vast number of currency pairs traded daily. There are many well known techniques used by market participants on both FOREX and stock-exchange markets (i.e. Fundamental and technical analysis) but nowadays AI based techniques seem to play key role in the automated transaction and decision supporting systems. This paper presents the comprehensive analysis over Feed Forward Multilayer Perceptron (ANN) parameters and their impact to accurately forecast FOREX trend of the selected currency pair. The goal of this paper is to provide information on how to construct an ANN with particular respect to its parameters and training method to obtain the best possible forecasting capabilities. The ANN parameters investigated in this paper include: number of hidden layers, number of neurons in hidden layers, use of constant/bias neurons, activation functions, but also reviews the impact of the training methods in the process of the creating reliable and valuable ANN, useful to predict the market trends. The experimental part has been performed on the historical data of the EUR/USD pair.", "title": "" }, { "docid": "49575576bc5a0b949c81b0275cbc5f41", "text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "title": "" }, { "docid": "b21135f6c627d7dfd95ad68c9fc9cc48", "text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than 'just' a mother. 
We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.", "title": "" }, { "docid": "34c9a0b4f4fdf3d4ef0fbb97e750754b", "text": "Plants are affected by complex genome×environment×management interactions which determine phenotypic plasticity as a result of the variability of genetic components. Whereas great advances have been made in the cost-efficient and high-throughput analyses of genetic information and non-invasive phenotyping, the large-scale analyses of the underlying physiological mechanisms lag behind. The external phenotype is determined by the sum of the complex interactions of metabolic pathways and intracellular regulatory networks that is reflected in an internal, physiological, and biochemical phenotype. These various scales of dynamic physiological responses need to be considered, and genotyping and external phenotyping should be linked to the physiology at the cellular and tissue level. A high-dimensional physiological phenotyping across scales is needed that integrates the precise characterization of the internal phenotype into high-throughput phenotyping of whole plants and canopies. By this means, complex traits can be broken down into individual components of physiological traits. Since the higher resolution of physiological phenotyping by 'wet chemistry' is inherently limited in throughput, high-throughput non-invasive phenotyping needs to be validated and verified across scales to be used as proxy for the underlying processes. Armed with this interdisciplinary and multidimensional phenomics approach, plant physiology, non-invasive phenotyping, and functional genomics will complement each other, ultimately enabling the in silico assessment of responses under defined environments with advanced crop models. This will allow generation of robust physiological predictors also for complex traits to bridge the knowledge gap between genotype and phenotype for applications in breeding, precision farming, and basic research.", "title": "" }, { "docid": "fcca051539729b005271e4f96563538d", "text": "This paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. This approach is inspired by non-directive play therapy. The experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. This approach has been tested in a long-term study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. The children's progress was analyzed according to three dimensions, namely, Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child's needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. 
They also expressed some interest in the robot, including, on occasion, affect.", "title": "" }, { "docid": "26ee1e5770a77d030b6230b8eef7e644", "text": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two-stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.", "title": "" }, { "docid": "75233d6d94fec1f43fa02e8043470d4d", "text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.", "title": "" }, { "docid": "3d95e2db34f0b1f999833946a173de3d", "text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social user with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content-centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets are given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.", "title": "" }, { "docid": "7eed84f959268599e1b724b0752f6aa5", "text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. 
Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.", "title": "" }, { "docid": "c699fc9a25183e998aa5cdebac1c0a43", "text": "DNN-based cross-modal retrieval is a research hotspot to retrieve across different modalities as image and text, but existing methods often face the challenge of insufficient cross-modal training data. In single-modal scenario, similar problem is usually relieved by transferring knowledge from largescale auxiliary datasets (as ImageNet). Knowledge from such single-modal datasets is also very useful for cross-modal retrieval, which can provide rich general semantic information that can be shared across different modalities. However, it is challenging to transfer useful knowledge from single-modal (as image) source domain to cross-modal (as image/text) target domain. Knowledge in source domain cannot be directly transferred to both two different modalities in target domain, and the inherent cross-modal correlation contained in target domain provides key hints for cross-modal retrieval which should be preserved during transfer process. This paper proposes Cross-modal Hybrid Transfer Network (CHTN) with two subnetworks: Modalsharing transfer subnetwork utilizes the modality in both source and target domains as a bridge, for transferring knowledge to both two modalities simultaneously; Layer-sharing correlation subnetwork preserves the inherent cross-modal semantic correlation to further adapt to cross-modal retrieval task. Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.", "title": "" }, { "docid": "5ab8a8f4991f7c701c51e32de7f97b36", "text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNNs, called convolutional neural networks (CNNs) have recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.", "title": "" } ]
scidocsrr
b175cd9f2ecd8c7706600f101a2e21dd
Recurrent Neural Network Postfilters for Statistical Parametric Speech Synthesis
[ { "docid": "104c9347338f4e725e3c1907a4991977", "text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which speech parameter sequence is generated from HMMs whose observation vector consists of spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.", "title": "" }, { "docid": "d46594f40795de0feef71b480a53553f", "text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.", "title": "" } ]
[ { "docid": "b7944edc9e6704cbf59489f112f46c11", "text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001", "title": "" }, { "docid": "39325a6b06c107fe7d06b958ebcb5f54", "text": "Trunk movements in the frontal and sagittal planes were studied in 10 healthy males (18-35 yrs) during normal walking (1.0-2.5 m/s) and running (2.0-6.0 m/s) on a treadmill. Movements were recorded with a Selspot optoelectronic system. Directions, amplitudes and phase relationships to the stride cycle (defined by the leg movements) were analyzed for both linear and angular displacements. 
During one stride cycle the trunk displayed two oscillations in the vertical (mean net amplitude 2.5-9.5 cm) and horizontal, forward-backward directions (mean net amplitude 0.5-3 cm) and one oscillation in the lateral, side to side direction (mean net amplitude 2-6 cm). The magnitude and timing of the various oscillations varied in a different way with speed and mode of progression. Differences in amplitudes and timing of the movements at separate levels along the spine gave rise to angular oscillations with a similar periodicity as the linear displacements in both planes studied. The net angular trunk tilting in the frontal plane increased with speed from 3-10 degrees. The net forward-backward trunk inclination showed a small increase with speed up to 5 degrees in fast running. The mean forward inclination of the trunk increased from 6 degrees to about 13 degrees with speed. Peak inclination to one side occurred during the support phase of the leg on the same side. Peak forward inclination was reached at the initiation of the support phase in walking, whereas in running the peak inclination was in the opposite direction at this point. The adaptations of trunk movements to speed and mode of progression could be related to changing mechanical conditions and different demands on equilibrium control due to e.g. changes in support phase duration and leg movements.", "title": "" }, { "docid": "15e440bc952db5b0ad71617e509770b9", "text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.", "title": "" }, { "docid": "d7c3f86e05eb471f7c0952173ae953ae", "text": "Rigid robotic manipulators employ traditional sensors such as encoders or potentiometers to measure joint angles and determine end-effector position. Manipulators that are flexible, however, introduce motions that are much more difficult to measure. This is especially true for continuum manipulators that articulate by means of material compliance. In this paper, we present a vision based system for quantifying the 3-D shape of a flexible manipulator in real-time. The sensor system is validated for accuracy with known point measurements and for precision by estimating a known 3-D shape. We present two applications of the validated system relating to the open-loop control of a tendon driven continuum manipulator. In the first application, we present a new continuum manipulator model and use the sensor to quantify 3-D performance. 
In the second application, we use the shape sensor system for model parameter estimation in the absence of tendon tension information.", "title": "" }, { "docid": "1d1f93011e83bcefd207c845b2edafcd", "text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers; however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers, thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.", "title": "" }, { "docid": "f07d44c814bdb87ffffc42ace8fd53a4", "text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs second-order information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm in a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction: min_{w ∈ R^d} F(w) = (1/n) ∑_{i=1}^{n} f(w; x^i, y^i). Idea: select a sizeable sample S_k ⊂ {1, . . . , n} at every iteration and perform quasi-Newton steps 1.
Distributed computing setting: distributed gradient computation (with faults) 2. Multi-Batch setting: samples are changed at every iteration to accelerate learning. Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost, or special synchronization. Issue: samples used at the beginning and at the end of every iteration are different • potentially harmful for quasi-Newton methods. Key: controlled sampling • consecutive samples overlap, S_k ∩ S_{k+1} = O_k ≠ ∅ • gradient differences based on this overlap – stable quasi-Newton updates. Multi-Batch L-BFGS Method: at the k-th iteration • sample S_k ⊂ {1, . . . , n} chosen, and iterates updated via w_{k+1} = w_k − α_k H_k g_k^{S_k}, where g_k^{S_k} is the batch gradient g_k^{S_k} = (1/|S_k|) ∑_{i ∈ S_k} ∇f(w_k; x^i, y^i) and H_k is the inverse BFGS Hessian approximation H_{k+1} = V_k^T H_k V_k + ρ_k s_k s_k^T, with ρ_k = 1/(y_k^T s_k) and V_k = I − ρ_k y_k s_k^T • to ensure consistent curvature pairs, updates s_{k+1} = w_{k+1} − w_k, y_{k+1} = g_{k+1}^{O_k} − g_k^{O_k}, where g_{k+1}^{O_k} and g_k^{O_k} are gradients based on the overlapping samples only, O_k = S_k ∩ S_{k+1}. Sample selection:", "title": "" }, { "docid": "c8dc167294292425ac070c6fa56e65c5", "text": "Alongside developing systems for scalable machine learning and collaborative data science activities, there is an increasing trend toward publicly shared data science projects, hosted in general or dedicated hosting services, such as GitHub and DataHub. The artifacts of the hosted projects are rich and include not only text files, but also versioned datasets, trained models, project documents, etc. Under the fast pace and expectation of data science activities, model discovery, i.e., finding relevant data science projects to reuse, is an important task in the context of data management for end-to-end machine learning. In this paper, we study the task and present the ongoing work on ModelHub Discovery, a system for finding relevant models in hosted data science projects. Instead of prescribing a structured data model for data science projects, we take an information retrieval approach by decomposing the discovery task into three major steps: project query and matching, model comparison and ranking, and processing and building ensembles with returned models. We describe the motivation and desiderata, propose techniques, and present opportunities and challenges for model discovery for hosted data science projects.", "title": "" }, { "docid": "3bba595fa3a3cd42ce9b3ca052930d55", "text": "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contributions to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future.", "title": "" }, { "docid": "0a58548ceecaa13e1c77a96b4d4685c4", "text": "Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data.
A general purpose image segmentation approach is used, including two feature learning algorithms; multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision, where although metadata had negligible influence, the best pixel-wise F1-score of 0.791 was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of 0.858. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts that were obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of r = 0.826.", "title": "" }, { "docid": "c0584e11a64c6679ad43a0a91d92740d", "text": "A challenge in teaching usability engineering is providing appropriate hands-on project experience. Students need projects that are realistic enough to address meaningful issues, but manageable within one semester. We describe our use of online case studies to motivate and model course projects in usability engineering. The cases illustrate scenario-based usability methods, and are accessed via a custom browser. We summarize the content and organization of the case studies, several case-based learning activities, and students' reactions to the activities. We conclude with a discussion of future directions for case studies in HCI education.", "title": "" }, { "docid": "62b6c1caae1ff1e957a5377692898299", "text": "We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.", "title": "" }, { "docid": "10e88f0d1a339c424f7e0b8fa5b43c1e", "text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. 
A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date", "title": "" }, { "docid": "611eacd767f1ea709c1c4aca7acdfcdb", "text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.", "title": "" }, { "docid": "b8b7abcef8e23f774bd4e74067a27e6f", "text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright  1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA", "title": "" }, { "docid": "6876748abb097dcce370288388e0965c", "text": "The design and manufacturing of pop-up books are mainly manual at present, but a number of the processes therein can benefit from computerization and automation. This paper studies one aspect of the design of pop-up books: the mathematical modelling and simulation of the pieces popping up as a book is opened. It developes the formulae for the essential parameters in the pop-up animation. This animation enables the designer to determine on a computer if a particular set-up is appropriate to the theme which the page is designed to express, removing the need for the laborious and time-consuming task of making manual prototypes", "title": "" }, { "docid": "1de3364e104a85af05f4a910ede83109", "text": "Activity theory holds that the human mind is the product of our interaction with people and artifacts in the context of everyday activity. Acting with Technology makes the case for activity theory as a basis for...", "title": "" }, { "docid": "05145a1f9f1d1423acb705159ec28f5e", "text": "We describe the first sub-quadratic sampling algorithm for the Multiplicative Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close connection between MAGM and the Kronecker Product Graph Model (KPGM) of Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices to sample small number of KPGM graphs and quilt them together. Under a restricted set of technical conditions our algorithm runs in O ( (log2(n)) 3 |E| ) time, where n is the number of nodes and |E| is the number of edges in the sampled graph. 
We demonstrate the scalability of our algorithm via extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes and 20 billion edges in under 6 hours.", "title": "" }, { "docid": "79e6d47a27d8271ae0eaa0526df241a7", "text": "A DC-DC buck converter capable of handling loads from 20 μA to 100 mA and operating off a 2.8-4.2 V battery is implemented in a 45 nm CMOS process. In order to handle high battery voltages in this deeply scaled technology, multiple transistors are stacked in the power train. Switched-Capacitor DC-DC converters are used for internal rail generation for stacking and supplies for control circuits. An I-C DAC pulse width modulator with sleep mode control is proposed which is both area and power-efficient as compared with previously published pulse width modulator schemes. Both pulse frequency modulation (PFM) and pulse width modulation (PWM) modes of control are employed for the wide load range. The converter achieves a peak efficiency of 75% at 20 μA, 87.4% at 12 mA in PFM, and 87.2% at 53 mA in PWM.", "title": "" }, { "docid": "715e5655651ed879f2439ed86e860bc9", "text": "This paper presents a new permanent-magnet gear based on the cycloid gearing principle, which normally is characterized by an extreme torque density and a very high gearing ratio. An initial design of the proposed magnetic gear was designed, analyzed, and optimized with an analytical model regarding torque density. The results were promising as compared to other high-performance magnetic-gear designs. A test model was constructed to verify the analytical model.", "title": "" }, { "docid": "2bb36d78294b15000b78acd7a0831762", "text": "This study aimed to verify whether achieving a dist inctive academic performance is unlikely for students at high risk of smartphone addiction. Additionally, it verified whether this phenomenon was equally applicable to male and femal e students. After implementing systematic random sampling, 293 university students participated by completing an online survey questionnaire posted on the university’s stu dent information system. The survey questionnaire collected demographic information and responses to the Smartphone Addiction Scale-Short Version (SAS-SV) items. The results sho wed that male and female university students were equally susceptible to smartphone add iction. Additionally, male and female university students were equal in achieving cumulat ive GPAs with distinction or higher within the same levels of smartphone addiction. Fur thermore, undergraduate students who were at a high risk of smartphone addiction were le ss likely to achieve cumulative GPAs of distinction or higher.", "title": "" } ]
scidocsrr
3044c0fae6c720c091ba4b7260555350
OPTIMIZATION OF A WAVE CANCELLATION MULTIHULL SHIP USING CFD TOOLS
[ { "docid": "7e1c5f17ac930b3582b4dd696785bcf5", "text": "Four methods of analysis | a nonlinear method based on Euler's equations and three linear potential ow methods | are used to determine the optimal location of the outer hulls for a wave cancellation multihull ship that consists of a main center hull and two outer hulls. The three potential ow methods correspond to a hierarchy of simple approximations based on the Fourier-Kochin representation of ship waves and the slender-ship approximation.", "title": "" } ]
[ { "docid": "316cdb1b9f67f4156931a9d2eb06145c", "text": "Irony is an important device in human communication, both in everyday spoken conversations as well as in written texts including books, websites, chats, reviews, and Twitter messages among others. Specific cases of irony and sarcasm have been studied in different contexts but, to the best of our knowledge, only recently the first publicly available corpus including annotations about whether a text is ironic or not has been published by Filatova (2012). However, no baseline for classification of ironic or sarcastic reviews has been provided. With this paper, we aim at closing this gap. We formulate the problem as a supervised classification task and evaluate different classifiers, reaching an F1-measure of up to 74 % using logistic regression. We analyze the impact of a number of features which have been proposed in previous research as well as combinations of them.", "title": "" }, { "docid": "f268718ceac79dbf8d0dcda2ea6557ca", "text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.06.003 ⇑ Corresponding author. E-mail addresses: fred.qi@ieee.org (F. Qi), gmshi@x 1 Principal corresponding author. Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "41a0681812527ef288ac4016550e53dd", "text": "Supervised learning using deep convolutional neural network has shown its promise in large-scale image classification task. As a building block, it is now well positioned to be part of a larger system that tackles real-life multimedia tasks. An unresolved issue is that such model is trained on a static snapshot of data. Instead, this paper positions the training as a continuous learning process as new classes of data arrive. A system with such capability is useful in practical scenarios, as it gradually expands its capacity to predict increasing number of new classes. It is also our attempt to address the more fundamental issue: a good learning system must deal with new knowledge that it is exposed to, much as how human do.\n We developed a training algorithm that grows a network not only incrementally but also hierarchically. Classes are grouped according to similarities, and self-organized into levels. The newly added capacities are divided into component models that predict coarse-grained superclasses and those return final prediction within a superclass. Importantly, all models are cloned from existing ones and can be trained in parallel. These models inherit features from existing ones and thus further speed up the learning. 
Our experiment points out advantages of this approach, and also yields a few important open questions.", "title": "" }, { "docid": "3c891452e416c5faa3da8b6e32a57b3f", "text": "Linear support vector machines (svms) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based svms are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel svms. We generalise this model to locally finite dimensional kernel svm.", "title": "" }, { "docid": "e2988860c1e8b4aebd6c288d37d1ca4e", "text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.", "title": "" }, { "docid": "76def4ca02a25669610811881531e875", "text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. 
We report the measured residual phase noise and frequency stability of the synthesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10- and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two signals were σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cryoCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ < 10<sup>4</sup> s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cryoCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.", "title": "" }, { "docid": "09dbfbd77307b0cd152772618c40e083", "text": "Textbook Question Answering (TQA) [1] is a newly proposed task to answer arbitrary questions in middle school curricula, which has particular challenges to understand the long essays in addition to the images. Bilinear models [2], [3] are effective at learning high-level associations between questions and images, but are inefficient to handle the long essays. In this paper, we propose an Essay-anchor Attentive Multi-modal Bilinear pooling (EAMB), a novel method to encode the long essays into the joint space of the questions and images. The essay-anchors, embedded from the keywords, represent the essay information in a latent space. We propose a novel network architecture to pay special attention on the keywords in the questions, consequently encoding the essay information into the question features, and thus the joint space with the images. We then use the bilinear models to extract the multi-modal interactions to obtain the answers. EAMB successfully utilizes the redundancy of the pre-trained word embedding space to represent the essay-anchors. This avoids the extra learning difficulties from exploiting large network structures. Quantitative and qualitative experiments show the outperforming effects of EAMB on the TQA dataset.", "title": "" }, { "docid": "c36986dd83276fe01e73a4125f99f7c0", "text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization, with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems).
KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.", "title": "" }, { "docid": "4004ab452ef58403af00bcf16c34e227", "text": "Wheel-spinning refers to a phenomenon in which a student has spent a considerable amount of time practicing a skill, yet displays little or no progress towards mastery. Wheel-spinning has been shown to be a common problem affecting a significant number of students in different tutoring systems and is negatively associated with learning. In this study, we construct a model of wheel-spinning, using generic features easily calculated from most tutoring systems. We show that for two different systems' data, the model generalizes to future students very well and can detect wheel-spinning in an early stage with high accuracy. We also refine the scope of the wheel-spinning problem in two systems using the model's predictions.", "title": "" }, { "docid": "e91622bf18d268991b6e15936574bc7e", "text": "This essay addresses the question of how participatory design (PD) researchers and practitioners can pursue commitments to social justice and democracy while retaining commitments to reflective practice, the voices of the marginal, and design experiments “in the small.” I argue that contemporary feminist utopianism has, on its own terms, confronted similar issues, and I observe that it and PD pursue similar agendas, but with complementary strengths. I thus propose a cooperative engagement between feminist utopianism and PD at the levels of theory, methodology, and on-the-ground practice. I offer an analysis of a case—an urban renewal project in Taipei, Taiwan—as a means of exploring what such a cooperative engagement might entail. I argue that feminist utopianism and PD have complementary strengths that could be united to develop and to propose alternative futures that reflect democratic values and procedures, emerging technologies and infrastructures as design materials, a commitment to marginalized voices (and the bodies that speak them), and an ambitious, even literary, imagination.", "title": "" }, { "docid": "ea5a455bca9ff0dbb1996bd97d89dfe5", "text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.", "title": "" }, { "docid": "2c0b13b5a1a4c207d52e674a518bf868", "text": "We have developed a new mutual information-based registration method for matching unlabeled point features. In contrast to earlier mutual information-based registration methods, which estimate the mutual information using image intensity information, our approach uses the point feature location information. 
A novel aspect of our approach is the emergence of correspondence (between the two sets of features) as a natural by-product of joint density estimation. We have applied this algorithm to the problem of geometric alignment of primate autoradiographs. We also present preliminary results on three-dimensional robust matching of sulci derived from anatomical magnetic resonance images. Finally, we present an experimental comparison between the mutual information approach and other recent approaches which explicitly parameterize feature correspondence.", "title": "" }, { "docid": "d97669811124f3c6f4cef5b2a144a46c", "text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.", "title": "" }, { "docid": "5ec58d07ad4b92fa14d541c696ee48fd", "text": "We present Jabberwocky, a social computing stack that consists of three components: a human and machine resource management system called Dormouse, a parallel programming framework for human and machine computation called ManReduce, and a high-level programming language on top of ManReduce called Dog. Dormouse is designed to enable cross-platform programming languages for social computation, so, for example, programs written for Mechanical Turk can also run on other crowdsourcing platforms. Dormouse also enables a programmer to easily combine crowdsourcing platforms or create new ones. Further, machines and people are both first-class citizens in Dormouse, allowing for natural parallelization and control flows for a broad range of data-intensive applications. And finally and importantly, Dormouse includes notions of real identity, heterogeneity, and social structure. We show that the unique properties of Dormouse enable elegant programming models for complex and useful problems, and we propose two such frameworks. ManReduce is a framework for combining human and machine computation into an intuitive parallel data flow that goes beyond existing frameworks in several important ways, such as enabling functions on arbitrary communication graphs between human and machine clusters. And Dog is a high-level procedural language written on top of ManReduce that focuses on expressivity and reuse. 
We explore two applications written in Dog: bootstrapping product recommendations without purchase data, and expert labeling of medical images.", "title": "" }, { "docid": "1ff317c5514dfc1179ee7c474187d4e5", "text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.", "title": "" }, { "docid": "8a363d7fa2bbf4b30312ca9efc2b3fa5", "text": "The objective of the present study was to investigate whether transpedicular bone grafting as a supplement to posterior pedicle screw fixation in thoracolumbar fractures results in a stable reconstruction of the anterior column, that allows healing of the fracture without loss of correction. Posterior instrumentation using an internal fixator is a standard procedure for stabilizing the injured thoracolumbar spine. Transpedicular bone grafting was first described by Daniaux in 1986 to achieve intrabody fusion. Pedicle screw fixation with additional transpedicular fusion has remained controversial because of inconsistent reports. A retrospective single surgeon cohort study was performed. Between October 2001 and May 2007, 30 consecutive patients with 31 acute traumatic burst fractures of the thoracolumbar spine (D12-L5) were treated operatively. The mean age of the patients was 45.7 years (range: 19-78). There were 23 men and 7 women. Nineteen thoracolumbar fractures were sustained in falls from a height; the other fractures were the result of motor vehicle accidents. The vertebrae most often involved were L1 in 13 patients and L2 in 8 patients. According to the Magerl classification, 25 patients sustained Type A1, 4 Type A2 and 2 Type A3 fractures. The mean time from injury to surgery was 6 days (range 2-14 days). Two postoperative complications were observed: one superficial and one deep infection. Mean Cobb's angle improved from +7.16 degrees (SD 12.44) preoperatively to -5.48 degrees (SD 11.44) immediately after operation, with a mean loss of correction of 1.00 degrees (SD 3.04) at two years. Reconstruction of the anterior column is important to prevent loss of correction. 
In our experience, the use of transpedicular bone grafting has efficiently restored the anterior column and has preserved the post-operative correction of kyphosis until healing of the fracture.", "title": "" }, { "docid": "780ffc42f3a9a49dc0c6dcba26be33a5", "text": "Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) nonregular languages such as balanced parentheses, palindromes, and the copy task where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition, and language modeling. We show that state-regularization (a) simplifies the extraction of finite state automata modeling an RNN’s state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines; (c) makes RNNs have better interpretability and explainability.", "title": "" }, { "docid": "1ebb4d7a99734f19fc202971b46bf568", "text": "Carnegie Mellon University has proposed an educational- and entertainment-based robotic lunar mission which will last two years and cover 1000 km on the moon and revisit several historic sites. With the transmission of live panoramic video, participants will be provided the opportunity for interactively exploring the moon through teleoperation and telepresence. The requirement of panoramic video and telepresence demands high data rates on the order of 7.5 Mbps. This is challenging since the power available for communication is approximately 100W and occupied bandwidth is limited to less than 10 MHz. The tough environment on the moon introduces additional challenges of survivability and reliability. A communication system based on a phased array antenna, Nyquist QPSK modulation and a rate 2/3 Turbo code is presented which can satisfy requirements of continuous high data rate communication at low power and bandwidth reliably over a two year mission duration. Three ground stations with 22m parabolic antennas are required around the world to maintain continuous communication. The transmission will then be relayed via satellite to the current control station location. This paper presents an overview of the mission, and communication requirements and design.", "title": "" }, { "docid": "2f7b0229fc9e126e09abe769d2b927dc", "text": "Complex event processing has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. The goal is to extract patterns from such event streams in order to make informed decisions in real-time. However, networking latencies and even machine failure may cause events to arrive out-of-order at the event stream processing engine. In this work, we address the problem of processing event pattern queries specified over event streams that may contain out-of-order data.
First, we analyze the problems that state-of-the-art event stream processing technology would experience when faced with out-of-order data arrival. We then propose a new solution of physical implementation strategies for the core stream algebra operators such as sequence scan and pattern construction, including stack-based data structures and associated purge algorithms. Optimizations for sequence scan and construction as well as state purging to minimize CPU cost and memory consumption are also introduced. Lastly, we conduct an experimental study demonstrating the effectiveness of our approach.", "title": "" } ]
scidocsrr
5ed37be0e4f614c80f76470b8848c91b
Automatic repair of buggy if conditions and missing preconditions with SMT
[ { "docid": "57d0e046517cc669746d4ecda352dc3f", "text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.", "title": "" }, { "docid": "2cb8ef67eb09f9fdd8c07e562cff6996", "text": "Patch generation is an essential software maintenance task because most software systems inevitably have bugs that need to be fixed. Unfortunately, human resources are often insufficient to fix all reported and known bugs. To address this issue, several automated patch generation techniques have been proposed. In particular, a genetic-programming-based patch generation technique, GenProg, proposed by Weimer et al., has shown promising results. However, these techniques can generate nonsensical patches due to the randomness of their mutation operations. To address this limitation, we propose a novel patch generation approach, Pattern-based Automatic program Repair (PAR), using fix patterns learned from existing human-written patches. We manually inspected more than 60,000 human-written patches and found there are several common fix patterns. Our approach leverages these fix patterns to generate program patches automatically. We experimentally evaluated PAR on 119 real bugs. In addition, a user study involving 89 students and 164 developers confirmed that patches generated by our approach are more acceptable than those generated by GenProg. PAR successfully generated patches for 27 out of 119 bugs, while GenProg was successful for only 16 bugs.", "title": "" }, { "docid": "5680257be3ac330b19645017953f6fb4", "text": "Debugging consumes significant time and effort in any major software development project. Moreover, even after the root cause of a bug is identified, fixing the bug is non-trivial. Given this situation, automated program repair methods are of value. In this paper, we present an automated repair method based on symbolic execution, constraint solving and program synthesis. In our approach, the requirement on the repaired code to pass a given set of tests is formulated as a constraint. Such a constraint is then solved by iterating over a layered space of repair expressions, layered by the complexity of the repair code. We compare our method with recently proposed genetic programming based repair on SIR programs with seeded bugs, as well as fragments of GNU Coreutils with real bugs. On these subjects, our approach reports a higher success-rate than genetic programming based repair, and produces a repair faster.", "title": "" } ]
[ { "docid": "85cdebb26246db1d5a9e6094b0a0c2e6", "text": "The fast simulation of large networks of spiking neurons is a major task for the examination of biology-inspired vision systems. Networks of this type label features by synchronization of spikes and there is strong demand to simulate these e,ects in real world environments. As the calculations for one model neuron are complex, the digital simulation of large networks is not e>cient using existing simulation systems. Consequently, it is necessary to develop special simulation techniques. This article introduces a wide range of concepts for the di,erent parts of digital simulator systems for large vision networks and presents accelerators based on these foundations. c © 2002 Elsevier Science B.V. All rights", "title": "" }, { "docid": "7b93d57ea77d234c507f8d155e518ebc", "text": "A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.", "title": "" }, { "docid": "c4912e6187e5e64ec70dd4423f85474a", "text": "Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.", "title": "" }, { "docid": "5eb63e991a00290d5892d010d0b28fef", "text": "In this paper we investigate deceptive defense strategies for web servers. Web servers are widely exploited resources in the modern cyber threat landscape. Often these servers are exposed in the Internet and accessible for a broad range of valid as well as malicious users. 
Common security strategies like firewalls are not sufficient to protect web servers. Deception based Information Security enables a large set of counter measures to decrease the efficiency of intrusions. In this work we depict several techniques out of the reconnaissance process of an attacker. We match these with deceptive counter measures. All proposed measures are implemented in an experimental web server with deceptive counter measure abilities. We also conducted an experiment with honeytokens and evaluated delay strategies against automated scanner tools.", "title": "" }, { "docid": "397f1c1a01655098d8b35b04011400c7", "text": "Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.", "title": "" }, { "docid": "0084d9c69d79a971e7139ab9720dd846", "text": "Retrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This \"Blobworld\" representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects.
Index Terms - Segmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.", "title": "" }, { "docid": "67826169bd43d22679f93108aab267a2", "text": "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging – this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise – this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.", "title": "" }, { "docid": "fe6f1234505ddf5fab14cd22119b8388", "text": "This paper deals with identifying the genre of a movie by analyzing just the visual features of its trailer. This task seems to be very trivial for a human; our endeavor is to create a vision system that can do the same, accurately. We discuss the approaches we take and our experimental observations. The contributions of this work are: (1) we propose a neural network (based on VGG) that can classify movie trailers based on their genres; (2) we release a curated dataset, called YouTube-Trailer Dataset, which has over 800 movie trailers spanning over 4 genres. We achieve an accuracy of 80.1% with the spatial features, and 85% using LSTM, and set these results as the benchmark for this dataset. We have made the source code publicly available.", "title": "" }, { "docid": "db9e922bcdffffc6586d10fa363b2e2d", "text": "Mallomonas eoa TAKAHASHII was first described by TAKAHASHII, who found the alga in ditches at Tsuruoka Parc, North-East Japan (TAKAHASHII 1960, 1963, ASMUND & TAKAHASHII 1969). He studied the alga by transmission electron microscopy and described its different kinds of scales. However, he did not report the presence of any cysts. In the spring of 1971 a massive development of Mallomonas occurred under the ice in Lake Trummen, central South Sweden. Scanning electron microscopy revealed that the predominant species consisted of Mallomonas eoa TAKAHASHII, which occurred together with Synura petersenii KORSHIKOV. In April the cells of Mallomonas eoa developed cysts and were studied by light microscopy and scanning electron microscopy. In contrast with earlier techniques the scanning electron microscopy made it possible to study the structure of the scales in various parts of the cell and to relate the cysts to the cells. Such knowledge is of importance also for paleolimnological research. Data on the quantitative and qualitative findings are reported below.", "title": "" }, { "docid": "f472388e050e80837d2d5129ba8a358b", "text": "Voice control has emerged as a popular method for interacting with smart-devices such as smartphones, smartwatches etc. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. A major challenge in designing a voice control application is that it requires continuous monitoring of user's voice input through the microphone.
Such applications utilize hotwords such as \"Okay Google\" or \"Hi Galaxy\" allowing them to distinguish user's voice command and her other conversations. A voice control application has to continuously listen for hotwords which significantly increases the energy consumption of the smart-devices.\n To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today's mobile devices are sensitive to user's voice. We also demonstrate that the effect of user's voice on accelerometer data is rich enough so that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g. how to extract unique signatures of user's speaking hotwords only from accelerometer data and how to reduce the interference caused by user's mobility.\n We finally implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. Compared to the microphone based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving the accuracy of 98% and 92% in static and mobile scenarios respectively.", "title": "" }, { "docid": "de45682fcc57257365ae2a35978b8694", "text": "Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them.", "title": "" }, { "docid": "8b5bf8cf3832ac9355ed5bef7922fb5c", "text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time.
A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.", "title": "" }, { "docid": "374f64916e84c01c0a6df6629ab02dbd", "text": "NASA Glenn Research Center, in collaboration with the aerospace industry and academia, has begun the development of technology for a future hybrid-wing body electric airplane with a turboelectric distributed propulsion (TeDP) system. It is essential to design a subscale system to emulate the TeDP power grid, which would enable rapid analysis and demonstration of the proof-of-concept of the TeDP electrical system. This paper describes how small electrical machines with their controllers can emulate all the components in a TeDP power train. The whole system model in Matlab/Simulink was first developed and tested in simulation, and the simulation results showed that system dynamic characteristics could be implemented by using the closed-loop control of the electric motor drive systems. Then we designed a subscale experimental system to emulate the entire power system from the turbine engine to the propulsive fans. Firstly, we built a system to emulate a gas turbine engine driving a generator, consisting of two permanent magnet (PM) motors with brushless motor drives, coupled by a shaft. We programmed the first motor and its drive to mimic the speed-torque characteristic of the gas turbine engine, while the second motor and drive act as a generator and produce a torque load on the first motor. Secondly, we built another system of two PM motors and drives to emulate a motor driving a propulsive fan. We programmed the first motor and drive to emulate a wound-rotor synchronous motor. The propulsive fan was emulated by implementing fan maps and flight conditions into the fourth motor and drive, which produce a torque load on the driving motor. The stator of each PM motor is designed to travel axially to change the coupling between rotor and stator. This feature allows the PM motor to more closely emulate a wound-rotor synchronous machine. These techniques can convert the plain motor system into a unique TeDP power grid emulator that enables real-time simulation performance using hardware-in-the-loop (HIL).", "title": "" }, { "docid": "43ff7d61119cc7b467c58c9c2e063196", "text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. 
However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9970a23aedeb1a613a0909c28c35222e", "text": "Imaging radars incorporating digital beamforming (DBF) typically require a uniform linear antenna array (ULA). However, using a large number of parallel receivers increases system complexity and cost. A switched antenna array can provide a similar performance at a lower expense. This paper describes an active switched antenna array with 32 integrated planar patch antennas illuminating a cylindrical lens. The array can be operated over a frequency range from 73 GHz–81 GHz. Together with a broadband FMCW frontend (Frequency Modulated Continuous Wave) a DBF radar was implemented. The design of the array is presented together with measurement results.", "title": "" }, { "docid": "e0b1056544c3dc5c3b6f5bc072a72831", "text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.2.", "title": "" }, { "docid": "adeb7bdbe9e903ae7041f93682b0a27c", "text": "Self -- Management systems are the main objective of Autonomic Computing (AC), and it is needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems such as, self-awareness system, when and where an error state occurs, knowledge for system stabilization, analyze the problem, healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, self-healing system should have the ability to modify its own behavior in response to changes within the environment. 
Recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.", "title": "" }, { "docid": "5b748e2bc26e3fab531f0f741f7de176", "text": "Computer models are widely used to simulate real processes. Within the computer model, there always exist some parameters which are unobservable in the real process but need to be specified in the computer model. The procedure to adjust these unknown parameters in order to fit the model to observed data and improve its predictive capability is known as calibration. In traditional calibration, once the optimal calibration parameter set is obtained, it is treated as known for future prediction. Calibration parameter uncertainty introduced from estimation is not accounted for. We will present a Bayesian calibration approach for stochastic computer models. We account for these additional uncertainties and derive the predictive distribution for the real process. Two numerical examples are used to illustrate the accuracy of the proposed method.", "title": "" }, { "docid": "83580c373e9f91b021d90f520011a5da", "text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.", "title": "" } ]
scidocsrr
0ec72d6eee7c539c0883c5f3977df19c
The Factor Structure of the System Usability Scale
[ { "docid": "19a28d8bbb1f09c56f5c85be003a9586", "text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.", "title": "" }, { "docid": "6deff83de8ad1e0d08565129c5cefb8a", "text": "Correlations between prototypical usability metrics from 90 distinct usability tests were strong when measured at the task-level (r between .44 and .60). Using test-level satisfaction ratings instead of task-level ratings attenuated the correlations (r between .16 and .24). The method of aggregating data from a usability test had a significant effect on the magnitude of the resulting correlations. The results of principal components and factor analyses on the prototypical usability metrics provided evidence for an underlying construct of general usability with objective and subjective factors.", "title": "" } ]
[ { "docid": "2b38ac7d46a1b3555fef49a4e02cac39", "text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "title": "" }, { "docid": "ba7701a94880b59bbbd49fbfaca4b8c3", "text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.", "title": "" }, { "docid": "392a683cf9fdbd18c2ac6a46962a9911", "text": "Recently, reinforcement learning has been successfully applied to the logical game of Go, various Atari games, and even a 3D game, Labyrinth, though it continues to have problems in sparse reward settings. It is difficult to explore, but also difficult to exploit, a small number of successes when learning policy. To solve this issue, the subgoal and option framework have been proposed. However, discovering subgoals online is too expensive to be used to learn options in large state spaces. We propose Micro-objective learning (MOL) to solve this problem. The main idea is to estimate how important a state is while training and to give an additional reward proportional to its importance. We evaluated our algorithm in two Atari games: Montezuma’s Revenge and Seaquest. With three experiments to each game, MOL significantly improved the baseline scores. Especially in Montezuma’s Revenge, MOL achieved two times better results than the previous state-of-the-art model.", "title": "" }, { "docid": "cae661146bc0156af25d8014cb61ef0b", "text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. 
We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.", "title": "" }, { "docid": "3b1a0eafe36176031b6463af4d962036", "text": "Tasks that demand externalized attention reliably suppress default network activity while activating the dorsal attention network. These networks have an intrinsic competitive relationship; activation of one suppresses activity of the other. Consequently, many assume that default network activity is suppressed during goal-directed cognition. We challenge this assumption in an fMRI study of planning. Recent studies link default network activity with internally focused cognition, such as imagining personal future events, suggesting a role in autobiographical planning. However, it is unclear how goal-directed cognition with an internal focus is mediated by these opposing networks. A third anatomically interposed 'frontoparietal control network' might mediate planning across domains, flexibly coupling with either the default or dorsal attention network in support of internally versus externally focused goal-directed cognition, respectively. We tested this hypothesis by analyzing brain activity during autobiographical versus visuospatial planning. Autobiographical planning engaged the default network, whereas visuospatial planning engaged the dorsal attention network, consistent with the anti-correlated domains of internalized and externalized cognition. Critically, both planning tasks engaged the frontoparietal control network. Task-related activation of these three networks was anatomically consistent with independently defined resting-state functional connectivity MRI maps. Task-related functional connectivity analyses demonstrate that the default network can be involved in goal-directed cognition when its activity is coupled with the frontoparietal control network. 
Additionally, the frontoparietal control network may flexibly couple with the default and dorsal attention networks according to task domain, serving as a cortical mediator linking the two networks in support of goal-directed cognitive processes.", "title": "" }, { "docid": "c042edd05232a996a119bfbeba71422e", "text": "Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.", "title": "" }, { "docid": "e08bc715d679ba0442883b4b0e481998", "text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). 
Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality", "title": "" }, { "docid": "ba533a610f95d44bf5416e17b07348dd", "text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"", "title": "" }, { "docid": "b4803364e973142a82e1b3e5bea21f24", "text": "Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. 
In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.", "title": "" }, { "docid": "472605bc322f1fd2c90ad50baf19fffb", "text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.", "title": "" }, { "docid": "bfe76736623dfc3271be4856f5dc2eef", "text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.", "title": "" }, { "docid": "147b207125fcda1dece25a6c5cd17318", "text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. 
The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.", "title": "" }, { "docid": "cb1c0c62269e96555119bd7f8cd666aa", "text": "The complexity of the visual world creates significant challenges for comprehensive visual understanding. In spite of recent successes in visual recognition, today’s vision systems would still struggle to deal with visual queries that require a deeper reasoning. We propose a knowledge base (KB) framework to handle an assortment of visual queries, without the need to train new classifiers for new tasks. Building such a large-scale multimodal KB presents a major challenge of scalability. We cast a large-scale MRF into a KB representation, incorporating visual, textual and structured data, as well as their diverse relations. We introduce a scalable knowledge base construction system that is capable of building a KB with half billion variables and millions of parameters in a few hours. Our system achieves competitive results compared to purpose-built models on standard recognition and retrieval tasks, while exhibiting greater flexibility in answering richer visual queries.", "title": "" }, { "docid": "a5cd94446abfc46c6d5c4e4e376f1e0a", "text": "Commitment problem in credit market and its effects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are financed by financial intermediaries. A simplified credit model of Dewatripont and Maskin is used to describe the financing process, in which the commitment problem or the \"soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endogenous determination of value and cost of projects, there arise multiple equilibria in the project financing model, namely refinancing equilibrium and no-refinancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of refinancing equilibrium and the possibility of \"Animal Spirits Cycles\" equilibrium are also discussed.", "title": "" }, { "docid": "1cf029e7284359e3cdbc12118a6d4bf5", "text": "Simultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Automation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle-filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms.
Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsification in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance-based methods, and multihypothesis techniques. The third development discussed in this tutorial is the trend towards richer appearance-based models of landmarks and maps. While initially motivated by problems in data association and loop closure, these methods have resulted in qualitatively different methods of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. The environment representation section surveys current developments in this area along a number of lines, including delayed mapping, the use of nongeometric landmarks, and trajectory estimation methods. SLAM methods have now reached a state of considerable maturity. Future challenges will center on methods enabling large-scale implementations in increasingly unstructured environments and especially in situations where GPS-like solutions are unavailable or unreliable: in urban canyons, under foliage, under water, or on remote planets.", "title": "" }, { "docid": "0171c8e352b5236ead1a59f38dffc94d", "text": "World Wide Web Consortium (W3C) is the international standards organization for the World Wide Web (www). It develops standards, specifications and recommendations to enhance the interoperability and maximize consensus about the content of the web and define major parts of what makes the World Wide Web work. Phishing is a type of Internet scams that seeks to get a user‟s credentials by fraud websites, such as passwords, credit card numbers, bank account details and other sensitive information. There are some characteristics in webpage source code that distinguish phishing websites from legitimate websites and violate the w3c standards, so we can detect the phishing attacks by check the webpage and search for these characteristics in the source code file if it exists or not. In this paper, we propose a phishing detection approach based on checking the webpage source code, we extract some phishing characteristics out of the W3C standards to evaluate the security of the websites, and check each character in the webpage source code, if we find a phishing character, we will decrease from the initial secure weight. 
Finally we calculate the security percentage based on the final weight, the high percentage indicates secure website and others indicates the website is most likely to be a phishing website. We check two webpage source codes for legitimate and phishing websites and compare the security percentages between them, we find the phishing website is less security percentage than the legitimate website; our approach can detect the phishing website based on checking phishing characteristics in the webpage source code.", "title": "" }, { "docid": "fdf979667641e1447f237eb25605c76b", "text": "A green synthesis of highly stable gold and silver nanoparticles (NPs) using arabinoxylan (AX) from ispaghula (Plantago ovata) seed husk is being reported. The NPs were synthesized by stirring a mixture of AX and HAuCl(4)·H(2)O or AgNO(3), separately, below 100 °C for less than an hour, where AX worked as the reducing and the stabilizing agent. The synthesized NPs were characterized by surface plasmon resonance (SPR) spectroscopy, transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD). The particle size was (silver: 5-20 nm and gold: 8-30 nm) found to be dependent on pH, temperature, reaction time and concentrations of AX and the metal salts used. The NPs were poly-dispersed with a narrow range. They were stable for more than two years time.", "title": "" }, { "docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2", "text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. 
First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. Next, discussion on a distributed problemsolving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supplychain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. 
This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Pareto-optimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions. That is, solutions for a supply-chain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. A cooperative supply-chain A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members namely, suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label (CSC).
Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decisionmaking processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as, zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dying yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be", "title": "" }, { "docid": "5db5bed638cd8c5c629f9bebef556730", "text": "The health benefits of garlic likely arise from a wide variety of components, possibly working synergistically. The complex chemistry of garlic makes it plausible that variations in processing can yield quite different preparations. Highly unstable thiosulfinates, such as allicin, disappear during processing and are quickly transformed into a variety of organosulfur components. The efficacy and safety of these preparations in preparing dietary supplements based on garlic are also contingent on the processing methods employed. Although there are many garlic supplements commercially available, they fall into one of four categories, i.e., dehydrated garlic powder, garlic oil, garlic oil macerate and aged garlic extract (AGE). Garlic and garlic supplements are consumed in many cultures for their hypolipidemic, antiplatelet and procirculatory effects. In addition to these proclaimed beneficial effects, some garlic preparations also appear to possess hepatoprotective, immune-enhancing, anticancer and chemopreventive activities. Some preparations appear to be antioxidative, whereas others may stimulate oxidation. These additional biological effects attributed to AGE may be due to compounds, such as S-allylcysteine, S-allylmercaptocysteine, N(alpha)-fructosyl arginine and others, formed during the extraction process. Although not all of the active ingredients are known, ample research suggests that several bioavailable components likely contribute to the observed beneficial effects of garlic.", "title": "" }, { "docid": "2c4a2d41653f05060ff69f1c9ad7e1a6", "text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. 
In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in fairly detailed manner.", "title": "" } ]
scidocsrr
07e4b7aa9c45c55ad067b1c298f3bacb
Business Process Modeling: Current Issues and Future Challenges
[ { "docid": "49db1291f3f52a09037d6cfd305e8b5f", "text": "This paper examines cognitive beliefs and affect influencing one’s intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users’ continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users’ confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.", "title": "" }, { "docid": "f66854fd8e3f29ae8de75fc83d6e41f5", "text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.", "title": "" } ]
[ { "docid": "2620ce1c5ef543fded3a02dfb9e5c3f8", "text": "Artificial bee colony (ABC) is the one of the newest nature inspired heuristics for optimization problem. Like the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC to get stuck on local solutions. This has been done by using of chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed in the benchmark functions. It has been detected that coupling emergent results in different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has been also shown that, the proposed methods have somewhat increased the solution quality, that is in some cases they improved the global searching capability by escaping the local solutions. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "465a5d9a2fe72abaf1fb8c2c041a9b64", "text": "A huge number of academic papers are coming out from a lot of conferences and journals these days. In these circumstances, most researchers rely on key-based search or browsing through proceedings of top conferences and journals to find their related work. To ease this difficulty, we propose a Personalized Academic Research Paper Recommendation System, which recommends related articles, for each researcher, that may be interesting to her/him. In this paper, we first introduce our web crawler to retrieve research papers from the web. Then, we define similarity between two research papers based on the text similarity between them. Finally, we propose our recommender system developed using collaborative filtering methods. Our evaluation results demonstrate that our system recommends good quality research papers.", "title": "" }, { "docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b", "text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.", "title": "" }, { "docid": "71a65ff432ae4b53085ca5c923c29a95", "text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. 
However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.", "title": "" }, { "docid": "31e25f598fb15964358c482b6c37271f", "text": "Any bacterial population harbors a small number of phenotypic variants that survive exposure to high concentrations of antibiotic. Importantly, these so-called 'persister cells' compromise successful antibiotic therapy of bacterial infections and are thought to contribute to the development of antibiotic resistance. Intriguingly, drug-tolerant persisters have also been identified as a factor underlying failure of chemotherapy in tumor cell populations. Recent studies have begun to unravel the complex molecular mechanisms underlying persister formation and revolve around stress responses and toxin-antitoxin modules. Additionally, in vitro evolution experiments are revealing insights into the evolutionary and adaptive aspects of this phenotype. Furthermore, ever-improving experimental techniques are stimulating efforts to investigate persisters in their natural, infection-associated, in vivo environment. This review summarizes recent insights into the molecular mechanisms of persister formation, explains how persisters complicate antibiotic treatment of infections, and outlines emerging strategies to combat these tolerant cells.", "title": "" }, { "docid": "85c133eecada3c4bb25f96ad8127eec3", "text": "With the advent of brain computer interfaces based on real-time fMRI (rtfMRI-BCI), the possibility of performing neurofeedback based on brain hemodynamics has become a reality. In the early stage of the development of this field, studies have focused on the volitional control of activity in circumscribed brain regions. However, based on the understanding that the brain functions by coordinated activity of spatially distributed regions, there have recently been further developments to incorporate real-time feedback of functional connectivity and spatio-temporal patterns of brain activity. The present article reviews the principles of rtfMRI neurofeedback, its applications, benefits and limitations. 
A special emphasis is given to the discussion of novel developments that have enabled the use of this methodology to achieve self-regulation of the functional connectivity between different brain areas and of distributed brain networks, anticipating new and exciting applications for cognitive neuroscience and for the potential alleviation of neuropsychiatric disorders.", "title": "" }, { "docid": "7ec6147d4549f07c2d8c6c24566faa2f", "text": "Physical unclonable functions (PUFs) are security features that are based on process variations that occur during silicon chip fabrication. As PUFs are dependent on process variations, they need to be robust against reversible and irreversible temporal variabilities. In this paper, we present experimental results showing temporal variability in 4, 5, and 7-stage ring oscillator PUFs (ROPUFs). The reversible temporal variabilities are studied based on voltage and temperature variations, and the irreversible temporal variabilities are studied based on accelerated aging. Our results show that ROPUFs are sensitive to temperature and voltage variations regardless of the number of RO stages used. It is also observed that the aging, temperature, and voltage variation effects are observed to be uniformly distributed throughout the chip. This is evidenced by noting uniform changes in the RO frequency. Our results also show that most of the bit flips occur when the frequency difference in the RO pairs is low. This leads us to the conclusion that RO comparison pairs that pass high frequency threshold should be filtered to reduce temporal variabilities effect on the ROPUF. The experimental results also show that the 3-stage ROPUF has the lowest percentage of bit flip occurrences and the highest number of RO comparison pairs that pass high frequency threshold.", "title": "" }, { "docid": "439c0a4e0c2171c066a2a8286842f0c2", "text": "Capacitive rotary encoders are widely used in motor velocity and angular position control, where high-speed and high-precision angle calculation is required. This paper illustrates implementation of arctangent operation, based on the CORDIC (an acronym for COordinate Rotational DIgital Computer) algorithm, in the capacitive rotary encoder signal demodulation in an FPGA to obtain the motor velocity and position. By skipping some unnecessary rotation in CORDIC algorithm, we improve the algorithm's computing accuracy. Experiments show that the residue angle error is almost reduced by half after the CORDIC algorithm is optimized, and is completely meet the precision requirements of the system.", "title": "" }, { "docid": "d88059813c4064ec28c58a8ab23d3030", "text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. 
In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.", "title": "" }, { "docid": "196868f85571b16815127d2bd87b98ff", "text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.", "title": "" }, { "docid": "675007890407b7e8a7d15c1255e77ec6", "text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.", "title": "" }, { "docid": "122bc83bcd27b95092c64cf1ad8ee6a8", "text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. 
This paper describes the object oriented design of an IoT based Automated Plant Watering System.", "title": "" }, { "docid": "ad2655aaed8a4f3379cb206c6e405f16", "text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.", "title": "" }, { "docid": "d60f812bb8036a2220dab8740f6a74c4", "text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. 
gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.", "title": "" }, { "docid": "872d1f216a463b354221be8b68d35d96", "text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.", "title": "" }, { "docid": "73d58bbe0550fb58efc49ae5f84a1c7b", "text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.", "title": "" }, { "docid": "14dc7c8065adad3fc3c67f5a8e35298b", "text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. 
It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions", "title": "" }, { "docid": "9c262b845fff31abd1cbc2932957030d", "text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanished identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gro¨bner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.", "title": "" }, { "docid": "73f605a48d0494d0007f242cee5c67ff", "text": "BACKGROUND\nLarge comparative studies that have evaluated long-term functional outcome of operatively treated ankle fractures are lacking. This study was performed to analyse the influence of several combinations of malleolar fractures on long-term functional outcome and development of osteoarthritis.\n\n\nMETHODS\nRetrospective cohort-study on operated (1995-2007) malleolar fractures. Results were assessed with use of the AAOS- and AOFAS-questionnaires, VAS-pain score, dorsiflexion restriction (range of motion) and osteoarthritis. Categorisation was determined using the number of malleoli involved.\n\n\nRESULTS\n243 participants with a mean follow-up of 9.6 years were included. Significant differences for all outcomes were found between unimalleolar (isolated fibular) and bimalleolar (a combination of fibular and medial) fractures (AOFAS 97 vs 91, p = 0.035; AAOS 97 vs 90, p = 0.026; dorsiflexion restriction 2.8° vs 6.7°, p = 0.003). 
Outcomes after fibular fractures with an additional posterior fragment were similar to isolated fibular fractures. However, significant differences were found between unimalleolar and trimalleolar (a combination of lateral, medial and posterior) fractures (AOFAS 97 vs 88, p < 0.001; AAOS 97 vs 90, p = 0.003; VAS-pain 1.1 vs 2.3 p < 0.001; dorsiflexion restriction 2.9° vs 6.9°, p < 0.001). There was no significant difference in isolated fibular fractures with or without additional deltoid ligament injury. In addition, no functional differences were found between bimalleolar and trimalleolar fractures. Surprisingly, poor outcomes were found for isolated medial malleolar fractures. Development of osteoarthritis occurred mainly in trimalleolar fractures with a posterior fragment larger than 5 %.\n\n\nCONCLUSIONS\nThe results of our study show that long-term functional outcome is strongly associated to medial malleolar fractures, isolated or as part of bi- or trimalleolar fractures. More cases of osteoarthritis are found in trimalleolar fractures.", "title": "" }, { "docid": "0f24b6c36586505c1f4cc001e3ddff13", "text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.", "title": "" } ]
scidocsrr
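Each record in this listing follows the same shape: a 32-character hexadecimal identifier, the query text, a JSON-style list of passages judged relevant to that query, a longer list of non-relevant passages, and a subset tag such as "scidocsrr", with every passage given as an object of the form {docid, text, title}. As a rough illustration only, the sketch below shows one way such a record could be modelled and parsed in Python; the field names (query_id, query, positive_passages, negative_passages, subset) and the helper functions are hypothetical naming choices made for this sketch, not definitions taken from the dump itself.

```python
# Hypothetical sketch: names and structure are illustrative assumptions, not part of the dump.
import json
from dataclasses import dataclass, field
from typing import List


@dataclass
class Passage:
    docid: str        # 32-character hex string, as seen in the listing
    text: str         # passage body (typically a paper abstract)
    title: str = ""   # frequently empty in this dump


@dataclass
class Record:
    query_id: str                                   # 32-character hex identifier
    query: str                                      # query text, e.g. a paper title
    positive_passages: List[Passage] = field(default_factory=list)
    negative_passages: List[Passage] = field(default_factory=list)
    subset: str = ""                                # e.g. "scidocsrr"


def parse_passages(raw: str) -> List[Passage]:
    """Parse one JSON array of {docid, text, title} objects into Passage instances."""
    return [Passage(**obj) for obj in json.loads(raw)]
```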
2bf7bad4ed4e1a9eccf935d41ea488cc
Towards View-point Invariant Person Re-identification via Fusion of Anthropometric and Gait Features from Kinect Measurements
[ { "docid": "96d5a0fb4bb0666934819d162f1b060c", "text": "Human gait is an important indicator of health, with applications ranging from diagnosis, monitoring, and rehabilitation. In practice, the use of gait analysis has been limited. Existing gait analysis systems are either expensive, intrusive, or require well-controlled environments such as a clinic or a laboratory. We present an accurate gait analysis system that is economical and non-intrusive. Our system is based on the Kinect sensor and thus can extract comprehensive gait information from all parts of the body. Beyond standard stride information, we also measure arm kinematics, demonstrating the wide range of parameters that can be extracted. We further improve over existing work by using information from the entire body to more accurately measure stride intervals. Our system requires no markers or battery-powered sensors, and instead relies on a single, inexpensive commodity 3D sensor with a large preexisting install base. We suggest that the proposed technique can be used for continuous gait tracking at home.", "title": "" }, { "docid": "caf0e4b601252125a65aaa7e7a3cba5a", "text": "Recent advances in visual tracking methods allow following a given object or individual in presence of significant clutter or partial occl usions in a single or a set of overlapping camera views. The question of when person detections in different views or at different time instants can be linked to the same individual is of funda mental importance to the video analysis in large-scale network of cameras. This is the pers on reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effec tively address the challenges associated with changes in illumination, pose, and clothing a ppearance variation are discussed. More specifically, the development of a set of models that ca pture the overall appearance of an individual and can effectively be used for information retrieval are reviewed. Some of them provide a holistic description of a person, and some o th rs require an intermediate step where specific body parts need to be identified. Some ar e designed to extract appearance features over time, and some others can operate reliabl y also on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular it describes very fast procedures for computing co-occurrenc e matrices by leveraging a generalization of the integral representation of images. The alg orithms are deployed and tested in a camera network comprising of three cameras with non-overl apping field of views, where a multi-camera multi-target tracker links the tracks in dif ferent cameras by reidentifying the same people appearing in different views.", "title": "" } ]
[ { "docid": "b4f2cbda004ab3c0849f0fe1775c2a7a", "text": "This research investigates the influence of religious preference and practice on the use of contraception. Much of earlier research examines the level of religiosity on sexual activity. This research extends this reasoning by suggesting that peer group effects create a willingness to mask the level of sexuality through the use of contraception. While it is understood that certain religions, that is, Catholicism does not condone the use of contraceptives, this research finds that Catholics are more likely to use certain methods of contraception than other religious groups. With data on contraceptive use from the Center for Disease Control’s Family Growth Survey, a likelihood probability model is employed to investigate the impact religious affiliation on contraception use. Findings suggest a preference for methods that ensure non-pregnancy while preventing feelings of shame and condemnation in their religious communities.", "title": "" }, { "docid": "96b1688b19bf71e8f1981d9abe52fc2c", "text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.", "title": "" }, { "docid": "52c9ee7e057ff9ade5daf44ea713e889", "text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.", "title": "" }, { "docid": "acaf692dc8abca626c51c65e79982a35", "text": "In this paper an impulse-radio ultra-wideband (IR-UWB) hardware demonstrator is presented, which can be used as a radar sensor for highly precise object tracking and breath-rate sensing. The hardware consists of an impulse generator integrated circuit (IC) in the transmitter and a correlator IC with an integrating baseband circuit as correlation receiver. The radiated impulse is close to a fifth Gaussian derivative impulse with σ = 51 ps, efficiently using the Federal Communications Commission indoor mask. A detailed evaluation of the hardware is given. For the tracking, an impulse train is radiated by the transmitter, and the reflections of objects in front of the sensor are collected by the receiver. 
With the reflected signals, a continuous hardware correlation is computed by a sweeping impulse correlation. The correlation is applied to avoid sampling of the RF impulse with picosecond precision. To localize objects precisely in front of the sensor, three impulse tracking methods are compared: Tracking of the maximum impulse peak, tracking of the impulse slope, and a slope-to-slope tracking of the object's reflection and the signal of the static direct coupling between transmit and receive antenna; the slope-to-slope tracking showing the best performance. The precision of the sensor is shown by a measurement with a metal plate of 1-mm sinusoidal deviation, which is clearly resolved. Further measurements verify the use of the demonstrated principle as a breathing sensor. The breathing signals of male humans and a seven-week-old infant are presented, qualifying the IR-UWB radar principle as a useful tool for breath-rate determination.", "title": "" }, { "docid": "b6fff873c084e9a44d870ffafadbc9e7", "text": "A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require their own permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user’s interaction and stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries, and providing services to validate the legitimacy of clicks, locally and remotely. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. AdSplit also supports a system resource that allows advertisements to display their content in an embedded HTML widget, without requiring any native code.", "title": "" }, { "docid": "71817d7adba74a7804767a5bc74e2d81", "text": "We propose a novel 3D integration method, called Vertical integration after Stacking (ViaS) process. The process enables 3D integration at significantly low cost, since it eliminates costly processing steps such as chemical vapor deposition used to form inorganic insulator layers and Cu plating used for via filling of vertical conductors. Furthermore, the technique does not require chemical-mechanical polishing (CMP) nor temporary bonding to handle thin wafers. The integration technique consists of forming through silicon via (TSV) holes in pre-multi-stacked wafers (> 2 wafers) which have no initial vertical electrical interconnections, followed by insulation of holes by polymer coating and via filling by molten metal injection. In the technique, multiple wafers are etched at once to form TSV holes followed by coating of the holes by conformal thin polymer layers. Finally the holes are filled by using molten metal injection so that a formation of interlayer connections of arbitrary choice is possible. In this paper, we demonstrate 3-chip-stacked test vehicle with 50 × 50 μm-square TSVs assembled by using this technique.", "title": "" }, { "docid": "fe536ac94342c96f6710afb4a476278b", "text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. 
Thus, the inverse kinematics of an human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee an seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less then 5°.", "title": "" }, { "docid": "de7eb0735d6cd2fb13a00251d89b0fbc", "text": "Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it has been possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI)--in particular single-trial or event-related fMRI--has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). 
The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "978b1e9b3a5c4c92f265795a944e575d", "text": "The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.", "title": "" }, { "docid": "885bb14815738145ea531d51385e8fdb", "text": "In this paper we study how individual sensors can compress their observations in a privacy-preserving manner. We propose a randomized requantization scheme that guarantees local differential privacy, a strong model for privacy in which individual data holders must mask their information before sending it to an untrusted third party. For our approach, the problem becomes an optimization over discrete mem-oryless channels between the sensor observations and their compressed version. We show that for a fixed compression ratio, finding privacy-optimal channel subject to a distortion constraint is a quasiconvex optimization problem that can be solved by the bisection method. Our results indicate interesting tradeoffs between the privacy risk, compression ratio, and utility, or distortion. For example, in the low distortion regime, we can halve the bit rate at little cost in distortion while maintaining the same privacy level. We illustrate our approach for a simple example of privatizing and recompressing lowpass signals and show that it yields better tradeoffs than existing approaches based on noise addition. Our approach may be useful in several privacy-sensitive monitoring applications envisioned for the Internet of Things (IoT).", "title": "" }, { "docid": "b7dbf710a191e51dc24619b2a520cf31", "text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. 
These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.", "title": "" }, { "docid": "2f838f0268fb74912d264f35277fe589", "text": "OBJECTIVE\n: The objective of this study was to examine the histologic features of the labia minora, within the context of the female sexual response.\n\n\nMETHODS\n: Eight cadaver vulvectomy specimens were used for this study. All specimens were embedded in paraffin and were serially sectioned. Selected sections were stained with hematoxylin and eosin, elastic Masson trichrome, and S-100 antibody stains.\n\n\nRESULTS\n: The labia minora are thinly keratinized structures. The primary supporting tissue is collagen, with many vascular and neural elements structures throughout its core and elastin interspersed throughout.\n\n\nCONCLUSIONS\n: The labia minora are specialized, highly vascular folds of tissue with an abundance of neural elements. These features corroborate previous functional and observational data that the labia minora engorge with arousal and have a role in the female sexual response.", "title": "" }, { "docid": "82be11a0006f253a1cc3fd2ed85855c8", "text": "Knowledge base (KB) sharing among parties has been proven to be beneficial in several scenarios. However such sharing can arise considerable privacy concerns depending on the sensitivity of the information stored in each party's KB. In this paper, we focus on the problem of exporting a (part of a) KB of a party towards a receiving one. We introduce a novel solution that enables parties to export data in a privacy-preserving fashion, based on a probabilistic data structure, namely the \\emph{count-min sketch}. With this data structure, KBs can be exported in the form of key-value stores and inserted into a set of count-min sketches, where keys can be sensitive and values are counters. Count-min sketches can be tuned to achieve a given key collision probability, which enables a party to deny having certain keys in its own KB, and thus to preserve its privacy. We also introduce a metric, the γ-deniability (novel for count-min sketches), to measure the privacy level obtainable with a count-min sketch. Furthermore, since the value associated to a key can expose to linkage attacks, noise can be added to a count-min sketch to ensure controlled error on retrieved values. Key collisions and noise alter the values contained in the exported KB, and can affect negatively the accuracy of a computation performed on the exported KB. We explore the tradeoff between privacy preservation and computation accuracy by experimental evaluations in two scenarios related to malware detection.", "title": "" }, { "docid": "d1ff3f763fac877350d402402b29323c", "text": "The study of microstrip patch antennas has made great progress in recent years. 
Compared with conventional antennas, microstrip patch antennas have more advantages and better prospects. They are lighter in weight, low volume, low cost, low profile, smaller in dimension and ease of fabrication and conformity. Moreover, the microstrip patch antennas can provide dual and circular polarizations, dual-frequency operation, frequency agility, broad band-width, feedline flexibility, beam scanning omnidirectional patterning. In this paper we discuss the microstrip antenna, types of microstrip antenna, feeding techniques and application of microstrip patch antenna with their advantage and disadvantages over conventional microwave antennas.", "title": "" }, { "docid": "b4f19048d26c0620793da5f5422a865f", "text": "Interest in supply chain management has steadily increased since the 1980s when firms saw the benefits of collaborative relationships within and beyond their own organization. Firms are finding that they can no longer compete effectively in isolation of their suppliers or other entities in the supply chain. A number of definitions of supply chain management have been proposed in the literature and in practice. This paper defines the concept of supply chain management and discusses its historical evolution. The term does not replace supplier partnerships, nor is it a description of the logistics function. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Introduction to supply chain concepts Firms can no longer effectively compete in isolation of their suppliers and other entities in the supply chain. Interest in the concept of supply chain management has steadily increased since the 1980s when companies saw the benefits of collaborative relationships within and beyond their own organization. A number of definitions have been proposed concerning the concept of “the supply chain” and its management. This paper defines the concept of the supply chain and discusses the evolution of supply chain management. The term does not replace supplier partnerships, nor is it a description of the logistics function. Industry groups are now working together to improve the integrative processes of supply chain management and accelerate the benefits available through successful implementation. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Definition of supply chain Various definitions of a supply chain have been offered in the past several years as the concept has gained popularity. The APICS Dictionary describes the supply chain as: 1 the processes from the initial raw materials to the ultimate consumption of the finished product linking across supplieruser companies; and 2 the functions within and outside a company that enable the value chain to make products and provide services to the customer (Cox et al., 1995). Another source defines supply chain as, the network of entities through which material flows. Those entities may include suppliers, carriers, manufacturing sites, distribution centers, retailers, and customers (Lummus and Alber, 1997). The Supply Chain Council (1997) uses the definition: “The supply chain – a term increasingly used by logistics professionals – encompasses every effort involved in producing and delivering a final product, from the supplier’s supplier to the customer’s customer. 
Four basic processes – plan, source, make, deliver – broadly define these efforts, which include managing supply and demand, sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, and delivery to the customer.” Quinn (1997) defines the supply chain as “all of those activities associated with moving goods from the raw-materials stage through to the end user. This includes sourcing and procurement, production scheduling, order processing, inventory management, transportation, warehousing, and customer service. Importantly, it also embodies the information systems so necessary to monitor all of those activities.” In addition to defining the supply chain, several authors have further defined the concept of supply chain management. As defined by Ellram and Cooper (1993), supply chain management is “an integrating philosophy to manage the total flow of a distribution channel from supplier to ultimate customer”. Monczka and Morgan (1997) state that “integrated supply chain management is about going from the external customer and then managing all the processes that are needed to provide the customer with value in a horizontal way”. They believe that supply chains, not firms, compete and that those who will be the strongest competitors are those that “can provide management and leadership to the fully integrated supply chain including external customer as well as prime suppliers, their suppliers, and their suppliers’ suppliers”. From these definitions, a summary definition of the supply chain can be stated as: all the activities involved in delivering a product from raw material through to the customer including sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, delivery to the customer, and the information systems necessary to monitor all of these activities. Supply chain management coordinates and integrates all of these activities into a seamless process. It links all of the partners in the chain including departments", "title": "" }, { "docid": "3a27da34a0b2534d121f44bc34085c52", "text": "In recent years both practitioners and academics have shown an increasing interest in the assessment of marketing -performance. This paper explores the metrics that firms select and some reasons for those choices. Our data are drawn from two UK studies. The first reports practitioner usage by the main metrics categories (consumer behaviour and intermediate, trade customer, competitor, accounting and innovativeness). The second considers which individual metrics are seen as the most important and whether that differs by sector. The role of brand equity in performance assessment and top", "title": "" }, { "docid": "ee23ef5c3f266008e0d5eeca3bbc6e97", "text": "We use variation at a set of eight human Y chromosome microsatellite loci to investigate the demographic history of the Y chromosome. Instead of assuming a population of constant size, as in most of the previous work on the Y chromosome, we consider a model which permits a period of recent population growth. We show that for most of the populations in our sample this model fits the data far better than a model with no growth. We estimate the demographic parameters of this model for each population and also the time to the most recent common ancestor. 
Since there is some uncertainty about the details of the microsatellite mutation process, we consider several plausible mutation schemes and estimate the variance in mutation size simultaneously with the demographic parameters of interest. Our finding of a recent common ancestor (probably in the last 120,000 years), coupled with a strong signal of demographic expansion in all populations, suggests either a recent human expansion from a small ancestral population, or natural selection acting on the Y chromosome.", "title": "" } ]
scidocsrr
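If records like the one above were loaded into the hypothetical Record structure sketched earlier, one natural use is to flatten each record into labelled query–passage pairs for training or evaluating a passage re-ranker: the passages listed as relevant to the query paired with label 1, and the remaining passages with label 0. The helper below is only a sketch of that step under the same naming assumptions; reranking_pairs and the label convention are not defined anywhere in the dump.

```python
# Continues the hypothetical Record sketch above; all names remain illustrative.
from typing import Iterable, List, Tuple


def reranking_pairs(records: Iterable[Record]) -> List[Tuple[str, str, int]]:
    """Flatten records into (query, passage_text, label) pairs:
    label 1 for passages listed as relevant, 0 for the rest."""
    pairs: List[Tuple[str, str, int]] = []
    for rec in records:
        for p in rec.positive_passages:
            pairs.append((rec.query, p.text, 1))
        for p in rec.negative_passages:
            pairs.append((rec.query, p.text, 0))
    return pairs
```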